# Planet Octave

## May 28, 2020

### Abdallah Khaled Elshamy

#### abdallahkelshamy

Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

###### What was done
• I finished my experiments with RapidJSON and finished reading the library’s documentation. As a small example, I wrote an Octave function (in C++) that adds two JSON objects. A JSON object is a set of key-value pairs; this function accepts objects with numeric values only and adds values that share the same key, leaving the rest unchanged. This small function is a good warm-up before the coding period because it exercises two things central to my project: creating Octave functions written in C++ and using RapidJSON. Enough talking, here is the code:
```cpp
#include "rapidjson/document.h"
#include "rapidjson/writer.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/error/en.h"
#include <octave/oct.h>

using namespace rapidjson;

// The DEFUN_DLD header was lost from the listing as posted; "add_json" is
// an illustrative name for the oct-file entry point.
DEFUN_DLD (add_json, args, , "Add two JSON objects with numeric values")
{
  if (args.length () != 2)
    print_usage ();

  if (! (args(0).is_string () && args(1).is_string ()))
    error ("parameters must be character strings");

  std::string first_json = args(0).string_value ();
  std::string second_json = args(1).string_value ();
  Document d1;
  Document d2;

  d1.Parse (first_json.c_str ());
  if (d1.HasParseError ())
    error ("(offset %u): %s\n",
           (unsigned) d1.GetErrorOffset (),
           GetParseError_En (d1.GetParseError ()));

  d2.Parse (second_json.c_str ());
  if (d2.HasParseError ())
    error ("(offset %u): %s\n",
           (unsigned) d2.GetErrorOffset (),
           GetParseError_En (d2.GetParseError ()));

  if (! (d1.IsObject () && d2.IsObject ()))
    error ("parameters must be JSON objects");

  // Check that the first JSON object has numeric values only.
  for (Value::ConstMemberIterator itr = d1.MemberBegin ();
       itr != d1.MemberEnd (); ++itr)
    {
      if (! itr->value.IsNumber ())
        error ("values must be numbers");
    }

  for (Value::ConstMemberIterator itr = d2.MemberBegin ();
       itr != d2.MemberEnd (); ++itr)
    {
      if (! itr->value.IsNumber ())
        error ("values must be numbers");

      if (d1.HasMember (itr->name.GetString ()))
        {
          // Shared key: add the values, promoting to double if needed.
          Value& s = d1[itr->name.GetString ()];
          if (s.IsDouble () || itr->value.IsDouble ())
            s.SetDouble (s.GetDouble () + itr->value.GetDouble ());
          else
            s.SetInt (s.GetInt () + itr->value.GetInt ());
        }
      else
        {
          // Key unique to the second object: copy the member over.
          // (The AddMember call was missing from the listing as posted,
          // so these members were silently dropped.)
          Value key (itr->name.GetString (), d1.GetAllocator ());
          Value value (itr->value, d1.GetAllocator ());
          d1.AddMember (key, value, d1.GetAllocator ());
        }
    }

  StringBuffer buffer;
  Writer<StringBuffer> writer (buffer);
  d1.Accept (writer);

  return octave_value (buffer.GetString ());
}
```
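To make the intended behaviour concrete, the same merge logic can be sketched in a few lines of Python with the standard json module (just an illustration of the semantics; the oct-file itself is C++ with RapidJSON):

```python
import json

def add_json_objects(first, second):
    """Merge two JSON objects with numeric values: values sharing a key
    are added; keys present in only one object are kept unchanged."""
    d1, d2 = json.loads(first), json.loads(second)
    if not (isinstance(d1, dict) and isinstance(d2, dict)):
        raise ValueError("parameters must be JSON objects")
    for d in (d1, d2):
        if not all(isinstance(v, (int, float)) and not isinstance(v, bool)
                   for v in d.values()):
            raise ValueError("values must be numbers")
    for key, value in d2.items():
        # get(key, 0) makes shared keys add up and unique keys copy over
        d1[key] = d1.get(key, 0) + value
    return json.dumps(d1)

print(add_json_objects('{"a": 1, "b": 2}', '{"b": 3, "c": 4.5}'))
# → {"a": 1, "b": 5, "c": 4.5}
```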

• I discovered this cool Octave command: __run_test_suite__. It runs the complete test suite of Octave (the one that gets run at the end of make check). This is very useful for regression testing.
• I also prepared my checklist for the test suite. My goal is for the test suite to cover all the conversion cases that jsonencode and jsondecode handle according to the official MATLAB documentation (e.g. from the JSON boolean data type to a scalar logical), so my checklist is simply the conversions listed at the end of the documentation of both functions (posting them here would make this post too long).
###### Timeline and Milestones

Since coding starts next week, this is a good time to show you my plan for the project. These are my milestones:

• 26/6: Deliver test suite (first evaluation period starts on 29/6)
• 20/7: Deliver jsondecode (second evaluation period starts on 27/7)
• 05/8: Deliver jsonencode (final week starts on 24/8)

Here is my timeline:

| From – To | Duration | Task | Hours/Week |
|---|---|---|---|
| 01/6 – 21/6* (final exams) | 20 days | Preparing the test suite | 7-10 |
| 21/6 – 03/7 | 12 days | Finalizing the test suite, running tests on the libraries, and creating reliable figures | 40-45 |
| 03/7 – 06/7 | 3 days | Analyzing results and taking design decisions with the mentors | 40-45 |
| 06/7 – 18/7 | 12 days | Implementing jsondecode | 40-45 |
| 18/7 – 20/7 | 2 days | Buffering | 40-45 |
| 20/7 – 03/8 | 14 days | Implementing jsonencode | 40-45 |
| 03/8 – 07/8 | 4 days | Buffering & documenting | 40-45 |
| 07/8 – 12/8 | 5 days | Converting the test suite to Octave BIST | 40-45 |
| 12/8 – 17/8 | 5 days | Cleaning the code and preparing the patch | 40-45 |
| 17/8 – 31/8 | 14 days | Perfecting the patch with community feedback | 40-45 |
My timeline
###### What I intend to do

That’s it for this week. See you next one.

## May 21, 2020

### Abdallah Khaled Elshamy

#### abdallahkelshamy

Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

###### What was done
• I finished my experiments with Oct-Files by working through a simple example that makes some checks on the input, generates some errors, and manipulates a struct inside the function.
• I refreshed my knowledge of shell scripting using this tutorial. Here is some useful info:
• grep -r: This option recursively searches for a pattern. It was useful for me because it showed me where the macro OCTAVE_CHECK_LIB is, so I could find out what its job is.
• which: An awesome feature of Octave is that it implements its own version of the “which” command. In Octave, “which” shows the file that contains a specific function.
• A cool best practice I learned is using backticks to improve performance when you want to run a set of commands and parse various bits of their output:
```shell
find / -name "*.html" -print | grep "/index.html$"
find / -name "*.html" -print | grep "/contents.html$"
```

This code could take a long time to run, and we are doing it twice!
A better solution is:

```shell
HTML_FILES=`find / -name "*.html" -print`
echo "$HTML_FILES" | grep "/index.html$"
echo "$HTML_FILES" | grep "/contents.html$"
```
• I got more familiar with GNU Autotools.
###### What I intend to do
• Extend configure.ac file to check for RapidJSON after some discussions on the mailing list about which macros to use and some build options.
• Finish my experiments with RapidJSON.
• Describe in details the parts of the test suite.
• Find out how to do regression testing.

That’s it for this week. See you next one.

## May 14, 2020

### Abdallah Khaled Elshamy

#### abdallahkelshamy

Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

###### What was done
• I read the GSoC student guide.
• I set up my public blog.
• I will be using the GitHub mirror of Octave instead of a Mercurial repo, so I set up my public repo and prepared my local environment.
• The decision on how to add RapidJSON library to Octave was discussed and made on the mailing list.
• I started reading about and experimenting with Oct-Files to get familiar with the code base.
###### What I intend to do
• Get more familiar with GNU Autotools.
• Extend configure.ac file to check for RapidJSON.
• Get familiar with RapidJSON.
• Finish my experiments with Oct-Files.

That’s it for this week. See you next one.

## February 20, 2020

### Nir

#### Octave in GSoC 2020

Octave is a mentor organization for Google Summer of Code this year. Applications from students are due by March 31. See the Octave wiki for tips on applying.

## March 04, 2019

### Nir

#### Google Summer of Code 2019: Call for Coders

Octave is in GSoC this year, for our fifth time as an independent organization!

Student applications for the paid summer internships are due 9 April.

Check out the Wiki for potential projects and application instructions.

## February 24, 2019

### Jordi Gutiérrez Hermoso

#### Exercising software freedom on Firefox

I’m a little unusual. I use Emacs.

That alone is unusual. But I get the impression that even amongst Emacs users, I’m in the minority in another way: I use the default keybindings. I love them. A lot of new Emacs users seem to insist on jamming vim keys into Emacs, but not me. These are my friends: C-p C-n C-f C-b C-a C-e C-k; down up left right start end kill.

I’m so gung-ho about Emacs keybindings that I made them the default keybinding of GTK+, which means that any application that uses GTK+ will respect Emacs keybindings for motion. They also work in anything that uses readline or readline-like input, like bash, python, or psql (postgresql’s default CLI client). Being used to Emacs keys has paid off for me. I have a consistent interface across the software that matters to me.
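For anyone who wants the same setup: in GTK+ 3 this is a one-line setting in ~/.config/gtk-3.0/settings.ini (GTK+ 2 takes the equivalent gtk-key-theme-name setting in ~/.gtkrc-2.0); a sketch of the GTK+ 3 form:

```ini
[Settings]
# Make GTK+ text widgets honour Emacs motion keybindings (C-a, C-e, C-k, …)
gtk-key-theme-name = Emacs
```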

I’m becoming a minority in another way: I use Firefox. And Firefox uses GTK+. That means I can use Emacs keybindings in Firefox.

Ah, but there’s a rub. Firefox binds C-n (or as most people would call it, “ctrl-n”) to new window. This is probably okay for people who don’t have the intersectionality of Emacs keybindings everywhere and Firefox. But for me, it’s intolerable. If I want to move a cursor down, I have to instead perform a very unnatural-feeling motion of moving my right hand to the arrow keys and hit the arrow down button. For those accustomed to using arrow keys, imagine if every time you pressed the down arrow Firefox would open a new window. Imagine software reacting so at odds to your habituation.

Up until Firefox 56 there was an easy workaround. You could download extensions that would let you configure Firefox’s keyboard shortcuts, including disabling some of them. I used to do this. The world, however, marches on and so does Firefox. Many extensions cannot do what they once did and the easy fix was gone.

I tried to cope, for a while. After all, it’s just one key. I can still use the arrow keys. I tried.

But no. It wouldn’t work. I couldn’t help myself. I often wanted to move the cursor down three or four rows and would accidentally open up three or four new windows. It was even worse because I could move in every other direction and it all felt natural, but if I made the mistake of going down, the software would react in the wrong way. Everything else did it right except Firefox. And one day, I had enough.

# Software Freedom

Enough was enough. I had accidentally opened a new window for the last time. I want to go down, you donut! And you won’t stop me anymore!

FREEDOMMM!

I had the motivation. I have some skill. We can rebuild Firefox. Make it better. More consistent. We have the technology.

I didn’t want to get involved in Firefox’s build drama, though. I didn’t want to figure out how to clone its repo, how to setup a development environment, how to configure the build, what kinds of builds there are, and how to integrate all of this with my operating system. Luckily, someone else has already done all of this work for me: the Debian packagers.

A Debian package knows what dependencies are required to build a package and has all of the tooling ready to build that package and make it fit exactly with my operating system. Right system libraries, right compilation options, everything. I know how to build Debian packages:

1. Get the source (apt-get source $packagename)
2. Get the dependencies (sudo apt build-dep $packagename)
3. Build the package (dpkg-buildpackage)

Easy enough.

# Firefox, the behemoth

As I started following the steps above, something was immediately evident. Firefox is huge. Enormous. Gargantuan. The biggest codebase I have ever seen. At a glance I saw a mix of Python, C++, Rust, and XML which I later came to recognise as XUL (“XUL?” I hear you ask. Yes. XUL. More on this below.) I can see why few dare tread in here.

I, on the other hand, with my motivation going strong, felt undaunted. I would tame The Beast of oxidised metal.

But I wouldn’t do it alone. I know that the Mozilla project still has a fairly active IRC network over at irc.mozilla.org, so I headed down that way. I started talking about my problem, asking for advice. While I waited for replies, I tried to do it on my own. I figured, GTK+, keybindings, C. I was looking for some C or C++ source file that would define the GTK+ keybindings. I would find this file and destroy the keybinding. I have done something similar in the past for other GTK+ programs.

My solo search proved unfruitful. I couldn’t find anything about new window in C++ source files. I even tried the Rust files, maybe they’ve done something there, but again nothing. My grepping did find new window commands in XML files, but I figured that couldn’t still be of use. Everyone knows it, it’s all over the software news: Firefox disabled XUL as part of its move to a Rust engine.

In the meantime, helpful people from IRC pushed me along my quest and pointed me in the right direction. Yes, XUL is all I needed.

# There is no Rust. There is only XUL!

Yep! Firefox has been lying to us! It’s still all XUL. All they’ve disabled is the external interface for extensions, but under the hood, Firefox is still the XUL mess it always was. They say they’re ripping it out, yet the process seems slow.

So I followed the advice. I changed a single XML file. I built the Debian package. I was expecting a long compilation time and I got it. I was worried I wouldn’t have enough RAM for the build, but it looks like 16 gigabytes with four cores (Thinkpad X1 Carbon 5th gen) was enough. People in IRC reassured me that it would take about two hours. They were right! Two hours later, I had a new Firefox in a neat little Debian package. I installed it (dpkg -i *.deb) eager to see the results and…

XML parsing error. Undefined entity.

Oh no! I had made a mistake! All I could do was close this error window. Firefox just wouldn’t start.

However, this confirmed two things. One, the XUL really is still being used. In fact, it’s so important that Firefox won’t even start if you get it wrong. And two… I was on the right track. Modifying XUL could very well get me to my goal of disabling one key.

The error window reminded me a lot of similar errors I had seen in the past when XUL was available to 3rd party extension authors. It seems that not as much as advertised has changed.

XUL parsing error

I tried again. I had removed the key but I hadn’t removed a few references to that key. Another build. Another two hours. In the meantime, Mozilla employees and enthusiasts in IRC kept asking me if I was doing an artifact build. I said no, that I wanted to learn as little as possible about Firefox’s build process. Turns out that an artifact build is an interesting thing where you download pre-built Firefox components and the build just puts them together, greatly reducing the compilation times.

I had the very specific goals of building a Debian package and not wanting to get too involved in build drama, so I politely refused the suggestions of artifact builds.

I just want my cursor to move down, man.

My second try also didn’t work. I had neglected one further reference to the new window key. I didn’t think it was necessary, but the XML again failed to parse because the key for undoing closing a window is defined in terms of the key for opening a new window. I decided that if I wasn’t going to be opening new windows, I also wasn’t going to undo close them, so I also deleted this reference.

By now it was getting late, I had to sleep, and I couldn’t wait for another two-hour build. I made the change, started the build, and went to bed like a kid excited for Christmas morning.

# Free at last!

The morning came. My new build was ready. I installed the third Debian package I built.

This time Firefox started. No more XML errors.

Could it be…?

I went to the first website I could think of that had a textarea element I could try to type in, paste.debian.net.

I typed some text. I hit enter a few times. I pressed C-p to go back up.

The moment of truth!

I hit C-n.

No new window.

The cursor moved down.

YES!!

Great success!

# The patch

So here’s the patch, for anyone else who wants it. I made it against ESR (currently Firefox 60) because that’s what’s packaged for Debian stable, but all of these modified files are still there in the current Mercurial repository I just checked right now.

```diff
@@ -27,7 +27,6 @@
label="&newNavigatorCmd.label;"
accesskey="&newNavigatorCmd.accesskey;"
-                          key="key_newNavigator"
command="cmd_newNavigator"/>
label="&newPrivateWindow.label;"
diff --git a/firefox-esr-60.5.1esr/browser/base/content/browser-sets.inc b/firefox-esr-60.5.1esr/browser/base/content/browser-sets.inc
--- a/firefox-esr-60.5.1esr/browser/base/content/browser-sets.inc
+++ b/firefox-esr-60.5.1esr/browser/base/content/browser-sets.inc
@@ -196,10 +196,6 @@

<keyset id="mainKeyset">
-    <key id="key_newNavigator"
-         key="&newNavigatorCmd.key;"
-         command="cmd_newNavigator"
-         modifiers="accel" reserved="true"/>
<key id="key_newNavigatorTab" key="&tabCmd.commandkey;" modifiers="accel"
command="cmd_newNavigatorTabNoEvent" reserved="true"/>
<key id="focusURLBar" key="&openCmd.commandkey;" command="Browser:OpenLocation"
@@ -378,7 +374,6 @@
#ifdef FULL_BROWSER_WINDOW
<key id="key_undoCloseTab" command="History:UndoCloseTab" key="&tabCmd.commandkey;" modifiers="accel,shift"/>
#endif
-    <key id="key_undoCloseWindow" command="History:UndoCloseWindow" key="&newNavigatorCmd.key;" modifiers="accel,shift"/>

#ifdef XP_GNOME
#define NUM_SELECT_TAB_MODIFIER alt
diff --git a/firefox-esr-60.5.1esr/browser/components/customizableui/content/panelUI.inc.xul b/firefox-esr-60.5.1esr/browser/components/customizableui/content/panelUI.inc.xul
--- a/firefox-esr-60.5.1esr/browser/components/customizableui/content/panelUI.inc.xul
+++ b/firefox-esr-60.5.1esr/browser/components/customizableui/content/panelUI.inc.xul
@@ -205,7 +205,6 @@
class="subviewbutton subviewbutton-iconic"
label="&newNavigatorCmd.label;"
-                       key="key_newNavigator"
command="cmd_newNavigator"/>
class="subviewbutton subviewbutton-iconic"
diff --git a/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd b/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
--- a/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
+++ b/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
@@ -298,7 +298,6 @@ These should match what Safari and other
<!ENTITY newUserContext.label             "New Container Tab">
<!ENTITY newUserContext.accesskey         "B">
<!ENTITY newNavigatorCmd.label        "New Window">
-<!ENTITY newNavigatorCmd.key        "N">
<!ENTITY newNavigatorCmd.accesskey      "N">
<!ENTITY newPrivateWindow.label     "New Private Window">
<!ENTITY newPrivateWindow.accesskey "W">
```

So there you have it. You can still alter Firefox’s XUL. You just have to compile it in instead of doing an extension.

## February 21, 2019

### Jordi Gutiérrez Hermoso

#### To Translate Is To Lie, So Weave A Good Yarn

I’m not a professional translator, but I know what I like in fiction.

When I was a Mexican kid in the 1980s we used to get old re-runs of the Flintstones in Spanish. Of course, my English wasn’t very good when I was very young, and I didn’t know them as “the Flintstones at all.” They were “Los Picapiedra” (something like “The Pickstones”), and not only that, but I had no idea who Fred or Barney were. Instead, I knew Pedro Picapiedra and Pablo Mármol (something like “Peter Pickstone” and “Paul Marble”). I liked them, and they felt familiar and comfortable. They spoke with a Spanish accent very close to mine and they used expressions that were similar to how my parents spoke.

It wasn’t until I got older and got more experienced that I realised I had been lied to, like many other lies we tell children. Pedro and Pablo weren’t a caricature of my Mexican lifestyle at all, but of a different, 1950s lifestyle from another country up north. I didn’t exactly feel cheated or lied to, but it was another cool new thing to learn about the world. I still felt much endeared to the original names and to this day, if I have to watch the Flintstones, I’d much rather view them as Los Picapiedra instead.

# Other Lies I Grew Up With

This wasn’t the only time this happened. Calvin & Hobbes fooled me too. This time their names didn’t change, but their language did. Calvin spoke to me from the comic book pages with a hip, cool Mexico City slang like other kids my age would use to elevate themselves in the eyes of other kids. Calvin talked about the prices of candy and magazines in pesos, with peso amounts appropriate for the time of publication, and used phrases like “hecho la mocha” (something like “made a blur”) when he said he was gonna do something very quickly. His mother sounded like my mother. This time the deception was even better, and for the longest time I honestly thought Calvin was a Mexican kid like me.

And there were others. The Thundercats were Los Felinos Cósmicos (something like “Cosmic Felines”), the Carebears were Los Ositos Cariñositos (something like “The Little Loving Bears”), and The Little Mermaid was La Sirenita (interesting how mythological sirens and mermaids are different in English but not in Spanish).

Again, as I grew up, so did my languages, and I was able to experience the other side of the localisation. It was always a small revelation to realise that the names I had known were an alteration, that the translators had taken liberties, that the stories had been subtly tampered with. In some cases, like with Calvin, I was thoroughly fooled.

I’m of the opinion that the translators and localisers of my youth performed their task admirably. A good translator should be a good illusionist. Making me believe that Calvin was Mexican or that the Flintstones could have been my neighbours is what a good translator should do. Translation is always far more than language, because languages are more than words. A language always comes with a culture, a people, habits and customs. You cannot just translate words alone; you have to translate everything else.

Only bad translators believe in the untranslatable. Despite differences in language, culture, and habits, a translator must seek out the closest points of contact across the divide and build bridges on those points. When no point of contact exists, a translator must build it. A new pun may be needed. The cultural references might need to be altered. If nothing else can be done and if there is time and space for it, a footnote can be the last resort, when a translator admits defeat and explains the terms of their surrender. Nothing went according to keikaku.

The world has changed a lot since I was a child. It has gotten a lot bigger. We have more ways to talk to each other. As a result, it’s getting harder for translators to perform their illusions.

# Modern Difficulties of Translation

With the internet and other methods of communication, a more unified global presence has become more important. Translations now have to be more alike to the source material. Big alterations to characters’ names or, worse, to the title of the work, are now out of the question.

Thus we get The Ice Queen becoming Frozen, because it’s good marketing (things didn’t go so well last time we made a title about a princess or a queen), and Frozen she shall be in Spanish as well, leaving Spanish speakers to pronounce it as best they can. As a small concession, we will allow the forgettable and bilingually redundant subtitle “Una Aventura Congelada” (something like “A Frozen Adventure”), but overall, the trademark must be preserved. There’s now far too much communication between Spanish and English speakers to allow the possibility of losing brand recognition.

Something similar and strange happened with the localisation of Japanese pop culture. We went from Japanimation to anime, from comics to manga. The fans will no longer let a good lie in their stories, and while we will grandfather in Megaman instead of Rockman or Astroboy instead of Mighty Atom, from now on new material must retain as foreign of a feeling as possible, because we now crave the foreign. It doesn’t matter if we really can understand it as closely as the Japanese do, because we crave the experience of the foreign.

The reverse also happens and the Japanese try their best to assimilate the complicated consonants of English into their language, but they have had more practice with this assimilation. Their faux pas have been documented on the web for the amusement of English speakers.

# When Lies Won’t do

I should be more fair to translators. Sometimes, a torrent of footnotes is all that will work. Of course, this should be reserved for the written word. Such is the case of the English translation of Master and Margarita. The endless stream of jokes making fun of Soviet propaganda and Soviet life is too much of a you-had-to-be-there. Explaining the jokes sadly makes them no longer funny, but there’s no other recourse except writing a completely different book, far removed from the experience of a modern Russian reading a Soviet satire.

But it doesn’t have to be this way. The Japanese translations of Don Quixote work without burdening the readers with the minutiae of life from a time long, long ago, in a country far, far away. Don Quixote’s exaggerated chivalric speech is rendered in Japanese translations as samurai speech. Tatamis suddenly appear in a place of La Mancha that I don’t care to call to mind.

And that’s the best kind of translation. The one that works and makes the fans love it, that makes them feel like they belong in this translated world.

## September 07, 2018

### Sudeepam Pandey

#### GSoC: final post

Welcome to the final post regarding my Google Summer of Code 2018 project. In this post, I'd like to talk about the overall work product and how it corresponds (or varies) from the original plan. Then, I would like to acknowledge some suggestions of my mentors and talk about some new ideas that were recently discussed with them.

However, before talking about any of those things, I'd like to share the code that was written down in the last twelve weeks. So here is the link to my public repository where all the code can be found and here is a patch that can be merged with the main line of development.

Now, coming to the final work product: functionality-wise, the feature turned out to be exactly what it was supposed to be, a fast and accurate way to suggest corrections for typographic errors made while working in the command window of Octave. The difference, however, was in the way of implementation.

My original idea was to make a Neural Network for this problem and I did go to some lengths to make that happen. Precisely, I did collect some data about the most common typographic errors made by Octave users and did code up a small model that could learn the correct spellings of a few commands of Octave. At the time, the motivation behind the Neural Network model was to have an algorithm that could work better than the existing algorithms that are used to compare two strings, in terms of the speed-accuracy trade-off.

However, during the community bonding period, some loopholes in my Neural Network implementation were pointed out by a few members of the Octave community. As a student who wants to pursue a career in data science, those counter points, and further research that was done on the Neural Network approach during the third phase of coding, turned out to be invaluable, for it taught me that 'Neural Networks + Data' is not a magical combination that solves every problem of this world. Maybe they can, but sometimes, simpler, more optimal solutions exist, and in those times, one must look at those solutions and optimize them further according to the problem at hand. Somewhere down the line, it also gave me a better understanding of the nature of Neural Networks.

Now, coming back to the technical details of this project: to summarize, I used the faster variation of the edit distance algorithm, the one that uses dynamic programming, and optimized it further by reducing the sample space on which the algorithm had to work. To reduce the sample space, I analyzed the data that I had originally collected to build a Neural Network, and based on the results of the analysis, I was able to make certain assumptions about the misspellings. These assumptions, coupled with some clever data organization techniques, helped me code up a fast and yet very accurate version of the edit distance algorithm. One can read about this implementation in great detail in the previous blog posts.
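In its generic textbook form, the dynamic-programming edit distance mentioned above looks like the Python sketch below (the base algorithm only, not my optimized implementation with the reduced sample space):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character insertions,
    deletions, and substitutions needed to turn string a into string b."""
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i  # prev is the old diagonal value dp[i-1][j-1]
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete ca
                        dp[j - 1] + 1,      # insert cb
                        prev + (ca != cb))  # substitute (free if equal)
            prev = cur
    return dp[-1]

print(edit_distance("kitten", "sitting"))  # → 3
```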

The plan was to replace this algorithm with a Neural Network during the third phase, 'if' it happened to perform better. As of now, however, I have found no way to make a Neural Network perform better than what had already been built, and so the suggestion engine still uses my original algorithm.

Additionally, I had to write the documentation and the tests for all of my code during the third phase of coding and I am glad to say that this work has been successfully completed. The main documentation for the m-scripts can be seen in the help text of those scripts. Besides that, I've also written down the documentation for the database file in a markdown file that is included with the database.

I must acknowledge the fact that Nick had guided me very well on how the documentation should be done, during the second phase evaluations. I did keep his guidance in mind while writing the documentation and the tests, and have, hopefully, made a well documented, well tested product.

Now, although the main documentation should be enough for anyone who wishes to understand how the feature works, if additional help is required, the previous posts of this blog (which contain a very detailed explanation) and the public mailing list of Octave (which I shall continue to follow) should be good places to visit.

During the community bonding period, Rik and I had discussed the importance of an on/off switch for this feature. This switch was already created by the time the first evaluations took place, but during the third phase, I took some time to wrap up this toggling command into a nice m-script. The users can now do a simple >>command_correction ("off") to switch off the feature and do a simple >>command_correction ("on") to turn it back on.

Next, I'd like to talk about something that Doug recently mentioned to me. He asked me if I could think of some way in which we can track the identifiers that don't get resolved by my software. Essentially, this problem is directly related to the maintenance of the database file. With Octave under constant development, new identifiers will be created and some identifiers will be deprecated as well. To make sure that the correction suggestion feature does not lose its value, the database of identifiers will have to be updated at regular intervals. Maybe an update every 6 months would be enough.

Currently, I've included a markdown file with the database that explains how this update can be done, and for now, this update can only be done manually. I cannot think of a way to have the database file updated automatically. Later on, maybe I or someone else could come up with a way to make a program read the release notices of Octave and its various packages and then modify the database accordingly. Maybe this could be a GSoC project for a future batch of students?

So in conclusion, the planned part of the project is absolutely complete and we have already started thinking of ways in which this feature can be improved. For further testing of the current implementation of the feature, I'd need the support of the members of the community. I would really appreciate it if anyone could try this feature for themselves and see if they could break it, or find any other kind of bugs, or maybe suggest some changes to the suggestion engine that could speed up the feature, or, maybe do something as small as pointing out some pieces of code where the coding style has not been followed properly.

Finally, I'd like to thank the Octave community. Working with them was an invaluable learning experience and I hope to be able to continue to associate myself with them for the years to come. :)

## August 13, 2018

### Erivelton Gualter

#### Final Post

The Google Summer of Code program is over, and I am positive I have gained a great deal of experience in this period; additionally, I have done significant work for GNU Octave on sisotool. Therefore, in this last post I will go over the project, describe …

# Octave Code Sharing

UPDATE: This post can actually be seen in the project wiki as well. I forgot to push it on github.io.

Now that the coding period has finally ended, I’m happy to share the summary of my project.

My project was Octave Code Sharing. As the name suggests, the first half of the project mainly focused on “sharing” Octave code. The intended design was that the user should be able to push their Octave script onto wiki.octave.org for others to use and see.

For this, I needed a way to convert the user’s script into a format acceptable to MediaWiki. My mentor had already done part of this by implementing the publish function, which takes the user’s script and converts it to a prescribed format (HTML by default). To produce wiki output, we needed __publish_wiki_output__.m, which I refactored a bit; this file is responsible for producing the wiki output, while publish is used to parse the script file. After this, the main task was to find a way to upload the formatted file to the wiki server. An excellent reference point for getting started with this was this script, written by my mentor in bash. I converted it into an Octave counterpart so that we wouldn’t need to execute third-party code (the bash script in this case) to transfer the contents of the file to the wiki server.

Because the backend of Octave is mostly C++, I needed to use the libcurl library to perform the actual transfer: Octave does not have its own implementation of such functionality, and many other areas of the software use libcurl as well.

The workflow is like this:

1. Input the script, user’s password and username to publish_to_wiki function, which converts the script to wiki format using publish function internally and saves the output file to a directory named html (analogous to MATLAB).
2. The same function then picks out the figures from the script, if any.
3. Then it inputs the figures, formatted output file content and credentials to wiki_upload script.
4. wiki_upload script then establishes the connection with the wiki server by asking for a login token.
5. It stores the cookies in a temporary file.
6. CSRF Login is then performed to the wiki server and an edit token (for editing a page) is obtained.
7. The wiki formatted output file from publish is then uploaded to the server.
8. Figures are uploaded last, with proper verification so that a particular figure is not uploaded if it already exists on the server.
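The token handshake in steps 4–7 can be sketched as a set of request URLs. Note that the endpoint and parameter names below come from the public MediaWiki action API documentation, not from the project's code, so treat this as an illustrative sketch:

```cpp
#include <string>

// Sketch of the MediaWiki action API requests behind steps 4-7 above.
// Parameter names are from the public MediaWiki API, not wiki_upload itself.
const std::string api = "https://wiki.octave.org/api.php";

// Step 4: ask the server for a login token (the session cookie starts here).
std::string login_token_request ()
{ return api + "?action=query&meta=tokens&type=login&format=json"; }

// Step 6a: perform the login, quoting the token we just received.
std::string login_request (const std::string &user, const std::string &token)
{ return api + "?action=login&lgname=" + user + "&lgtoken=" + token
         + "&format=json"; }

// Step 6b: with the session cookie set, request a CSRF (edit) token.
std::string edit_token_request ()
{ return api + "?action=query&meta=tokens&type=csrf&format=json"; }

// Step 7: upload the wiki-formatted page text, authorized by the edit token.
std::string edit_request (const std::string &page, const std::string &token)
{ return api + "?action=edit&title=" + page + "&token=" + token
         + "&format=json"; }
```

Each step depends on the cookies and tokens returned by the previous one, which is why the cookie file in step 5 is essential.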

To test yourself, you may want to use the following command:

publish_to_wiki ("script_file", "username", "password")

This should place your script on a URL similar to https://wiki.octave.org/script_file.

As an already performed example, I published a script named intro.m on the test wiki server set up by my mentor. I used the command,

publish_to_wiki("intro", "myUserName", "myPassword");


An important point in this process is the storing of cookies, which is performed by libcurl and set up in liboctave/url-transfer.h. The most challenging part of this half of the project was figuring out the uploading of figures, which I was able to do in a week or so. This required me to read a lot of documentation and code. Finally, the figures were uploaded using the form_data_post function.
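For readers curious what a form post actually produces on the wire, a multipart/form-data body looks roughly like the sketch below. libcurl assembles this structure itself inside form_data_post; the field and file names here ("file", "demo.png") are made up for illustration:

```cpp
#include <string>

// Hand-rolled sketch of a multipart/form-data POST body, the shape of
// payload that form_data_post hands to libcurl when uploading a figure.
// All names are illustrative; they are not from the project's code.
std::string multipart_body (const std::string &boundary,
                            const std::string &field,
                            const std::string &filename,
                            const std::string &data)
{
  std::string body;
  body += "--" + boundary + "\r\n";                       // part separator
  body += "Content-Disposition: form-data; name=\"" + field +
          "\"; filename=\"" + filename + "\"\r\n";        // field metadata
  body += "Content-Type: application/octet-stream\r\n\r\n";
  body += data + "\r\n";                                  // raw file bytes
  body += "--" + boundary + "--\r\n";                     // closing boundary
  return body;
}
```

This is also why a plain POSTFIELDS upload cannot carry a figure: the server expects this boundary-delimited structure, not a urlencoded string.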

This was the first half of the project.

Next was implementing MATLAB-compatible RESTful services for Octave. This part included implementing the weboptions, webread and webwrite functions. Other implementations of the suite will follow in post-GSoC work.

In this part of the project, working on weboptions was very fun. I got to learn a lot about the internal processing of function files and how Octave uses handle classes to cater to the needs of user-defined classes. Also, the way getters and setters work is really nice, even though there is still room for improvement in Octave's classdef support.

weboptions is used to put header fields in the other two functions.

webwrite is used to POST something to a URI. It can also take a weboptions object as an argument to amend the header fields.

webread is used to GET something from a URI. It also accepts weboptions object as an input.

You can refer to the help text of the functions to learn more about them. A few of the weboptions options are not yet implemented; after discussion with my mentor, they are explicitly listed in its help text, so they won’t work with webwrite and webread either.

The challenging part here was passing the weboptions object from the Octave script function to the C++ backend and then performing the required operations on it. These functions, too, use libcurl for sending HTTP requests.

All the goals of the project were met. There were times when I was unable to get things done, trivial or not, but with continuous lookup and exploration, I got them done. Sometimes there are reasons out of your reach, like the sudden power failures my machine encountered during the last phase. So Kai, my mentor, being at his best, understood the concern and extended the weekly plan by a couple of days after I requested it. I had also implemented a test server in Java for GET and POST requests, only to find a week later that there's an easy alternative in https://httpbin.org/get and https://httpbin.org/post, respectively.

## Further work after GSoC is over

As for the publish_to_wiki function, all the tests that I did behaved as desired; however, if any error is reported, it will be sorted out then and there.

There are a few options left in weboptions which are not currently implemented (this was intended). They would be implemented using jsonencode and jsondecode, which are essential for the next round of renovation of this function.

webwrite works well for sending HTTP requests in text form, but it is still unable to send a query list in JSON form. That is one thing on my to-do list.

I’m currently trying to polish webread; it is in a workable state, but there’s still space for improvement. For example, when I try to GET an image from a URI, I am unable to decode the binary data from the output stream to convert it to an image. So this is what I’m working on currently.

The period between the start of the second phase and the middle of the third phase was really fruitful. I had challenges to solve, and that’s when I got adrenaline rushes many times.

All in all, the project was very exciting, as opposed to my initial impression, where I thought it would be a web-development project using HTML/CSS and the like. I really learned a lot, not only about Octave and its codebase but also about many other things like Bitbucket, libcurl and MediaWiki, to name a few. I also learned to manage things on time. This was one of the greatest perks of doing GSoC.

Needless to say, this all happened with the help of my supportive and understanding mentor, Kai, who synced up with me in a timely manner, even though I used to hear from my peers that their mentors didn’t respond on time. He always appreciated my efforts and had a sound idea of what task would take what duration. He also expressed his dissatisfaction at times, which really gave me a boost to perform better. And not to forget the little things that he noticed, like checking which student has been putting up regular posts on Planet Octave. I’ll always be thankful to him for selecting me for GSoC. Also, I’m thankful to jwe for introducing such cool software, and at no cost. The list could go on without a stop, but my heartiest thanks to all the maintainers and mentors who helped me bring the best out of myself. I want to keep working with the Octave community in the future as well!

To a great “codeful” summer, 2018!

## CODE

Complete diff of the work done in the project.

Commit for the first half, Octave code sharing - a5b41a9

Commit for the second half, RESTful services - a5e8a2f

The bookmark ‘ocs’ is for GSoC project.

## July 29, 2018

#### Week 15

With the last week of GSoC arriving, I’m in a position to wrap up the project. This week I’m focusing on thorough testing, documentation and other code optimizations. I’ve already refactored the existing code from the third evaluations a bit, in two different drives. In those, I made a single internal function for setting up the various cURL options for the webread and webwrite functions. I’ve also removed a significant number of lines of code in some files, because I believe a small amount of quality, useful code is better than a large amount of useless code, as Kai suggested in the early days of the project.

I had also implemented the webread function the previous week.

## July 19, 2018

#### Week 13 & 14

UPDATE 2: I found the cause of the urlread function not working correctly. cURL sends the request as application/x-www-form-urlencoded, whereas it should be application/json, which is explicitly set in the cURL request that I’ve written down. I think this could be solved using the webwrite function that I’ve written.

Now coming to the project work, I’ve been wandering inside the codebase for the last three days to implement the webwrite function. This is what I’ve implemented as webwrite.m. It takes the input and arranges it for further processing in what we currently call the __internal__ function. An obvious question is why two different functions: the processing of the input is easier in Octave than in C++, and we really need to work with cURL here, so C++ cannot be sidelined. MATLAB can take the following as a string with 42 as a number, str = ["His", "age", "is", 42], whereas in Octave we get the ASCII character corresponding to 42 in its place. This is one difference I observed. Because of this, I’m asking users to pass strings all along, without any numbers. But I’m not sure how I can test whether the above str contains a number, so that I could warn the user that what they entered is not what Octave will receive. See this FIXME. Earlier I was sending a flag indicating whether the user had supplied some weboptions or not, but now, if they haven’t, I supply the default weboptions object; after all, it’s good to use your own code when you’ve put some time into writing it!

Coming to the __internal__ function, this took most of my time. I’d like to explicitly write two lines of code here; even if you do not understand them, kindly read them.

octave_classdef *obj = args(nargin-1).classdef_object_value ();
cdef_object object = obj->get_object ();


These two lines look small, but they took most of my time, and I was happiest when I got them working. They take the weboptions object supplied from the m-script and give us the C++ equivalent. I wrote this here just for my satisfaction anyway. The intended workflow is: unpack the object and send its contents to an internal function that sets the cURL headers which need to be amended. I first tried to use the map_value () function of the cdef_object, which essentially maps the keys (as strings) to their values (as cdef_property); see this. But because it always gives a warning, which is right, I refrained from using it, since employing it in a function would decrease the UX quality of the software. Instead, I extracted the keys as strings and then queried their values. To get away from the warnings, I excluded two keys (“delete” and “display”) for now. This is bad design, because although there won’t be any such keys in a weboptions object, we should still be handling it some other way, which I currently do not know; I’ll ask Kai or jwe about this.

Another thing that just occurred to me: I’m unpacking the object in the __internal__ function and then sending the pieces to some other function. Why not just send the object there and unpack it there, reducing a lot of lines of code? To unpack, I wrote an equivalent struct, which I pass to the set_weboptions function, which then sets the corresponding options in cURL. I’ll change this to a simpler format so that we won’t need the struct. There are a few FIXME tags in this function for which I’ll need the help of my mentor and other maintainers. For example, I searched the code to find how to push back values in an Array&lt;std::string&gt;, but couldn’t find out how. Now that I know how the classdef API works in C++, this should save us a lot of time.
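As a sketch of that unpack-and-forward step, here is roughly what the key/value extraction boils down to, using a plain std::map in place of the real cdef_object (so everything here is illustrative, not the project's actual API):

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: turn weboptions fields (modelled as a string map,
// standing in for the real cdef_object) into curl-style header lines,
// skipping the classdef housekeeping keys mentioned in the post.
std::vector<std::string>
options_to_headers (const std::map<std::string, std::string> &opts)
{
  std::vector<std::string> headers;
  for (const auto &kv : opts)
    {
      if (kv.first == "delete" || kv.first == "display")
        continue;   // classdef methods, not HTTP options
      headers.push_back (kv.first + ": " + kv.second);
    }
  return headers;
}
```

In the real code the values come from querying each key of the cdef_object individually, but the filtering and forwarding idea is the same.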

I also changed the http_action function and completely removed the previous http_get and http_post functions to accommodate new HTTP methods like DELETE and PUT. Kai had complained about the old design being bad back in phase one, so his query should be resolved by this as well. Also, the error in this function is not propagated correctly to the interpreter; see the FIXME in the function. Oh, and the set_header function that I wrote in phase two can be of good use now in phase three, so that’s good.

Also, http://httpbin.org is a cool site to throw requests at for testing, rendering my server useless. Anyway, I learnt a thing or two about that as well along the way.

A few other small details are left, which will be cleared up as soon as Kai and I meet on IRC.

I must say, initially I was expecting too much, but Kai correctly saw the depth of the project and resisted adding more work, given that I still spend around 30-35 hours a week on the project (I hope this is not offending anyone!). Writing this function was a great experience. I generally get a headache working on a laptop, but I could seamlessly work without any long break for around 18 hours a day. Wrapping up this week’s post, I hope someone might reach here reading it. Feel free to offer suggestions or advice on the work.

Thanks and have a good time.

UPDATE: There’s some problem with the urlread function in Octave when sending POST requests. Because this week’s work is related to the same, I’ll check what exactly is going wrong. You can try the curl command in the meantime. Also, the server will (most probably) respond at any time from now on.

Hi!

I’m pleased to tell you all that I’ve got good remarks in the second evaluation of GSoC! Kai didn’t complain about anything I was lacking in the first evaluation. I’m really thankful to him for this.

Coming to the next and final phase of GSoC, I have to implement the webread and webwrite functions in Octave. I had some issues with getters and setters in weboptions, which have now been rectified.

To check the working of the two functions, I needed a server that could entertain HTTP requests and let us verify the desired results. For this, I chose the Play! Framework, a super cool tool in Java that serves our purpose well. Needless to say, it is RESTful by default and works on the MVC design pattern. Without wasting everyone’s time, I’d like to introduce a few files that are relevant to our project.

The first and foremost is the routes file, which binds our app to the outer world. Routes are also called endpoints. We write all the HTTP requests (GET, POST, PUT, etc.) here and map those endpoints to the corresponding functions that act on the requests. As an example, you can see that a GET request to the webhook endpoint is mapped to the webhookGet() function. Similarly, we can write functions as per our needs.

Next is a controller. Controllers are the files that essentially parse/decode your request into smaller pieces and then apply some computation with the help of auxiliary functions defined elsewhere. The above-mentioned webhookGet() function is written in a controller.

The last one is application.conf, which has all the configuration needed to run the project. The server is run using the sbt run command and shipped for production using the sbt dist command; you only need a Java environment to make these commands work. You can play with it using the source code from here. Note that the latter command cannot be run on the production server unless you have enough memory for it; I myself run the dist command on my machine and then rsync the zip file to the server, although there are easier methods available, like using a CI server. Kindly feel free to ask me anything related to the framework, or anything else related to the project in general.

Now that the server has been set up, we can hit it with GET and POST requests and tweak what happens with our requests as per our needs. Currently, the server behaves as an echo server, so it will let you know what you sent.

To check if the server works as expected for you, you can issue the following command in Octave:

s = urlread("https://batterylow.me:9000/webhook", "get", {"mode", "testing", "verify_token", "theTokenToAccount", "message", "Post back this message to me"})

The last parameter in the above cell string should be returned to you. Note that mode and verify_token should be testing and theTokenToAccount respectively, because a request only succeeds when the above two pairs match, as you can see in the source code. Remember to use the https protocol while sending the request, because the server doesn’t accept requests over http (even I spent considerable time on this silly mistake!). You can use curl as well by invoking the following command:

 curl -H "Content-Type: application/json" -X POST "https://batterylow.me:9000/webhook" -d '{"name": "yourName"}'

Of course, you can easily send the above two requests using Octave as well, like this:

GET Request:
s = urlread("https://batterylow.me:9000/webhook", "get", {"mode", "testing", "verify_token", "theTokenToAccount", "message", "Post back this message to me"})

POST Request:
s = urlread("https://batterylow.me:9000/webhook", "post", {"name", "yourname"})


One hurdle for me right now is my system; it keeps crashing, at all unknown times! I’m trying to get it working asap, which is why I was late completing the webread function. Today as well, I only just got some space to write the post. I’ll update this post in a day or two with the implementation details of the function. (I had intimated Kai about this.)

NOTE: You may not get a response from the server, because I need to run it on that machine myself. Kindly let me know on IRC if you have any suggestions or advice. I’ll definitely reply if my machine is working and I’m online!

Till then! :-)

## July 09, 2018

#### Week 12

I’ve implemented the weboptions function. I also added relevant comments wherever needed, including to the code from the first evaluation. This completes the second-phase evaluation task, i.e., implementing the wiki_upload script and weboptions. There was an unprecedented delay of a week when I was trying to upload the images to the server. It would’ve given me a head start if I had completed the task without the delay, although I am still on track. There’s one thing left: the help text that I wrote for weboptions isn’t displayed when I issue help weboptions, because it’s a classdef and not a function. Other than this, I feel the task has been executed correctly.

Coming to the next task, I plan to implement the webread and webwrite functions with MATLAB compatibility. I think I can change the wiki_upload script to be a generic one and then use it for both wiki and webwrite. This would help us extend it with other functionality in the future.

Note that a few of the weboptions fields are currently there only for MATLAB compatibility, as described in the previous post. Also, the second-phase evaluation will take place this week. Hoping not to give Kai a chance to complain! :-)

## July 06, 2018

### Erivelton Gualter

#### Second Evaluation - week 8

So, here is my last post before the second evaluation. If you have been following my blog or the Octave blog, you know that the purpose of this Google Summer of Code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also …

## July 03, 2018

### Sudeepam Pandey

#### GSoC project progress: part three

The goal for the second evaluations was to code up a complete, working, command-line suggestion feature that supports identifiers and graphics properties of core Octave and all the Octave Forge packages. I am happy to say that this goal has been achieved and we do have a working suggestion feature now. The link to the public repository where the code can be found is this.

If you haven't already, you should read my previous posts to find out what the community wanted the feature to look like and how much progress had been already made. You may need that to understand the contents of this post. In this post, I would like to talk about the additional work that has been done and the work that will be done in the days to come.

At the time of the first evaluations, one of my mentors, Nicholas, expressed how he would be interested in seeing how the rest of the project progresses, including the aspects related to user interface and maintainability of the code by other developers. I'd like to address these points first.

So the UI is relatively simple. You enter a misspelling and some suggestions are displayed. We could have tried adding some GUI pop-ups but I refrained myself from trying to do those. There were two primary reasons for that.
• The first reason is that a GUI pop-up looks very unpleasant when you are working on the CLI of Octave, but honestly, that is more of a personal opinion I suppose.
• The second, and stronger, reason is that adding a GUI pop-up would have been a really complicated task due to the way Octave handles errors, and would have resulted in things like the "undefined near line..." error message being displayed for the misspelled command after the correct command has been executed.
There are some other reasons as well which have been discussed with the members of the community before. Obviously we can try changing things later on, if we really want to, but as of now, suggestions are simply displayed and the user can just use the upward arrow key of their keyboard and edit the previous command to quickly correct their misspelling.

I have accounted for code maintainability as well. I moved a few pieces of code here and there (see the commit log) and have structured the feature so that all the code related to the UI, or how the feature presents itself to the user, is in one file (scripts/help/__suggestions__.m), and all the code related to the suggestion engine, which generates the possible corrections for the misspelled identifier, is in another (scripts/help/__generate__.m). A lot of comments have been included, and the code is simple enough to be read and understood by anyone who knows how the Octave or MATLAB programming language works. Another important point is that all the graphics properties and identifiers of Octave core and Forge with which a misspelling can be compared have been stored in a database file called func.db (examples/data/func.db). I described this file in my previous post.

Maintainability should be very easy with such an implementation. If UI changes are required, major changes need to be made only to the file __suggestions__.m. If the algorithm of the suggestion engine has to be changed, changing the code of __generate__.m is enough, and if new identifiers are added to Octave (something that will happen constantly), including them in the well-organized database file (which can be done very easily with a load>edit>save) is enough.
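To give a feel for the kind of comparison a suggestion engine makes between a misspelling and the identifiers in a database like func.db, here is a generic edit-distance sketch in C++. The real engine lives in __generate__.m and may use a different algorithm, so this is illustrative only:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Classic Levenshtein edit distance: the minimum number of single-character
// insertions, deletions and substitutions needed to turn `a` into `b`.
// A suggestion engine can rank database identifiers by this distance
// from the misspelled input and show the closest few.
int edit_distance (const std::string &a, const std::string &b)
{
  std::vector<std::vector<int>> d (a.size () + 1,
                                   std::vector<int> (b.size () + 1));
  for (size_t i = 0; i <= a.size (); i++) d[i][0] = i;  // delete all of a
  for (size_t j = 0; j <= b.size (); j++) d[0][j] = j;  // insert all of b
  for (size_t i = 1; i <= a.size (); i++)
    for (size_t j = 1; j <= b.size (); j++)
      d[i][j] = std::min ({ d[i-1][j] + 1,                         // deletion
                            d[i][j-1] + 1,                         // insertion
                            d[i-1][j-1] + (a[i-1] != b[j-1]) });   // substitution
  return d[a.size ()][b.size ()];
}
```

For example, a misspelling like "plto" is two edits away from "plot", so "plot" would rank near the top of the candidate list.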

Now I'd like to describe the other tasks that have been done in this coding phase. These include adding the support for the remaining packages of Octave forge and adding support for the graphic properties.

Including the remaining packages of Octave Forge was very easy: all I had to do was fetch the list of identifiers, clean up the data a little, and include it in the database file.

The challenging part was adding support for graphics properties, mainly because it required me to write C++ code for a missing_property_hook() function which had to be similar in architecture to the already existing missing_function_hook() function.

In the codebase of Octave, missing_function_hook() points to a particular m-script which is called when an unknown command is encountered by the parser. As I described earlier, I had extended its functionality to trigger the suggestion feature when an unknown identifier is found. The missing_property_hook() had to do something similar: call a certain m-script when an unknown graphics property is encountered.

Rik helped a lot with this part, and finally I was able to code up a missing_property_hook() function which triggers the suggestion feature when an unknown graphics property is encountered. Although the code does what it is supposed to, I'll be honest and say that this part is still a bit of a black box to me. I'd appreciate it if some other maintainer who is good with C++ and familiar with the code of missing_function_hook() would take a look at missing_property_hook() and point out or fix any issues they find.

I'd like to mention that the suggestion feature differentiates between the levels of parsing, i.e. whether the trigger is an unknown property or an unknown command, by looking at the number of input arguments. The rest of the functionality is same.

With all these things done, I was able to realize a complete and working command-line suggestion feature and complete the goal that was set for the phase-two evaluations. Future work planned for phase three of coding includes writing the documentation, writing some tests, fixing any and every bug that is reported, and seeing if I can use a better algorithm for the suggestion engine. An additional thing I would like to do is to nicely wrap up the on/off switch and other such user settings into a single m-script for a better user experience.

Since the phase-two work is done, I'll start working on the things planned for phase three from tomorrow onwards. I'll publish another post when I make more significant changes. Till then, thank you for reading and goodbye.

## July 01, 2018

#### Week 11

This week I fixed some styling issues in the previous codebase and other minor issues. Kai and I had a very fruitful discussion this week about the roadmap ahead. We discussed many things.

Kai verified that wiki_upload.m (an important dependency of publish_to_wiki.m) works as expected, including the checks for uploading the same image again: MediaWiki itself does a hash check before uploading and sends an error if the image is identical to the previous one, or a warning if the image already exists on the wiki server. You can check this by calling:

publish_to_wiki ("script", "username", "password")

where script, username and password correspond to the script that you want to publish, and your username and password for wiki.octave.space (this will be changed to wiki.octave.org later), respectively.

One problem I noted is that Octave hangs if the connection is lost in the middle of the transfer. This is not specific to this function; it affects everything that uses libcurl's interface to Octave. Nevertheless, I tried to look into it, with the CURLOPT_TIMEOUT option and others as well. Unfortunately, I couldn't get a viable answer, because the timeout value of that option is measured from the start of the transfer and NOT from when the connection is lost. So if someone has a large transfer, setting this value is potentially dangerous, because their transfer would get cancelled in the middle of the process even though the connection is good. There are also provisions to abort the transfer if the speed drops below a given threshold, but this again cannot be relied on, because the user may simply have a slower internet connection. So the idea of resuming Octave was dropped altogether; it's again up to the user to get a proper internet connection!

Other than this, there was the question of making test cases for wiki_upload.m, for which I suggested measuring the Content-Length of the query string that is formed as part of the transfer. This again has problems, because the libcurl API for Octave doesn't let the developer see the actual query string which is transferred over the connection. FYI, the query string formation takes place here. Exposing it would only add a big overhead to the values returned from the C++ API to the Octave code, because all of the functions that use the perform () function (which actually performs the transfer) would need to be changed. I had initially suggested this because during debugging I had actually changed the form_query_string function according to my needs.

But then I realised that this is not feasible, because:

• As mentioned above, there would be unnecessary overhead in changing all the functions because of the change in the form_query_string function. A number of other already-implemented functions use it, so regressions can occur.

• We cannot actually measure the real Content-Length properly, because the content is sent percent-encoded, which replaces reserved and non-ASCII characters with their escaped versions; so a whitespace is changed to %20. Of course, there are ways to compute the encoded length of all the characters by running various loops and adding offsets, but I do not think this is a good way to test the main job of wiki_upload, which essentially needs to check whether the file has been uploaded to the wiki or not.
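The length mismatch is easy to see with a minimal percent-encoder. This is a sketch of standard URL encoding, not Octave's actual implementation:

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Minimal percent-encoder: unreserved ASCII characters pass through,
// everything else becomes %XX. The encoded length therefore differs
// from the raw string length whenever such characters appear, which is
// why raw length is a poor proxy for the transmitted Content-Length.
std::string percent_encode (const std::string &s)
{
  std::string out;
  for (unsigned char c : s)
    {
      if (std::isalnum (c) || c == '-' || c == '_' || c == '.' || c == '~')
        out += c;
      else
        {
          char buf[4];
          std::snprintf (buf, sizeof buf, "%%%02X", c);  // e.g. ' ' -> "%20"
          out += buf;
        }
    }
  return out;
}
```

For instance, "a b" (3 bytes raw) encodes to "a%20b" (5 bytes on the wire).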

We'll see if there's anything else I can do about this; otherwise, I will need to manually check at various times whether the file has been uploaded correctly or not.

Lastly, the commenting part, which I don't want to lose marks for, is left, and I'll be doing that in the coming days before the evaluation, so that Kai doesn't get a chance to bash me on this. ;-)

THIS COMPLETES THE FIRST HALF OF THE PROJECT, i.e., OCTAVE CODE SHARING.

Next lined up is, setting up RESTful web services for Octave.

We had a good discussion over this too. First to be implemented is weboptions. I initially thought of implementing it using a struct in the backend, but then Kai suggested using classdef in Octave and making an object. Of course, he was right; one simple reason being that weboptions in MATLAB doesn't display the Password field, and with a struct there is no way to store the password in plain text while displaying it obscured.

Here is what I've implemented for this. It acts almost like weboptions in MATLAB, with subtle differences. For example, when you call the following, it will show the answer that follows:

>>    d = weboptions

<object weboptions>


MATLAB shows the values of the fields instead. For that, I've created a method values, so that when you call d.values or values(d), you get to see all the values that are set for the object. There are a few problems which will most probably get rectified when Kai and I have another meeting on IRC. Basically, I'll need to find a way to represent a cell string in its input form, i.e., something like

 ans =   {"foo", "bar"}


and not

ans =
{
[1,1] = "foo"
[1,2] = "bar"
}


You can set values for the various fields and members in the object by calling d.field_name = your_desired_value;. This will do as desired. Some of the fields are currently there only for MATLAB compatibility and will be dealt with later, like ContentReader, MediaType, CertificateFilename, etc.

Oh and kindly strip your latest commit on ocs (a merge commit) in your local repo, else it’ll create a new head. Sorry about that!

Wrapping up for now! Apologies for such a long post.

### Erivelton Gualter

#### Edit Compensator Dynamics

So far, to design a controller using sisotool we need to select the desired feature to add to the compensator, such as a real or complex pole or zero. In order to perform this task, we have two options. First, we can go to the main tab and select the feature …

## June 24, 2018

### Erivelton Gualter

#### Back to Coding

Results from the first evaluation came 9 days ago. All three GSoC students were successful! For the readers of my blog, you can find them at http://planet.octave.org/. If you are already a reader of Planet Octave, you are in the right place.

The feedback from my …

## June 23, 2018

#### Week 9 & 10

UPDATE: I am more than happy to announce that I’ve been able to upload the images to the wiki server!! The problem was not in form_data_post but in http_post (in my version of Octave’s repository). There were two faults:

• I was calling http_action, which in turn called http_post, which then POSTed the data, despite the fact that there was no POSTFIELD or POSTable data to send; it was a FORM submission instead. I’ve now added the perform() function in form_data_post itself so that I won’t need to interact with the former.
• I had set the Content-Length header from the length of POSTFIELDS, which isn’t needed as of now.

Nevertheless, I’m pumped up again for furthering my work!

There has been some problem in completing the upload_to_wiki script. Unfortunately, I’m unable to send the images to the servers. Other than that, everything is working fine. I’ve been trying to do this for the past week, but still with no success, even after tracking down cURL’s source code.

Basically, we need a linked list of pointers to the information that we want to send as form data. I tried to traverse it (it is internal to cURL and unseen by the end user), and everything looks fine there. The only bottleneck I can observe for now is that the form_data_post function is not working as desired. I’ve tried all the other alternatives to check where my code is not behaving to standard, but I think only the above function can have the problem.

I also discussed this issue with the mentors and other people on the IRC channel. However, there’s one more thing to it. The functionality that I’m currently using (CURLOPT_HTTPPOST) is deprecated as of cURL version 7.56, and the MIME API is used instead. (See this for more.) I asked Kai about this and he agreed that I should keep backwards compatibility, but then jwe and andy suggested that I use the newer MIME API functionality and make the feature available only to those who have cURL 7.56+. I’ll ask the mentors once again whether the problem should get solved for now and I switch to the MIME API once the work is over, or whether I should instead do it right away.

Other than that, I will start implementing the RESTful services from now on, the other half of the project. So, we’ll see how many of the functions that I’ve implemented so far get reused.

Until then! :-)

## June 11, 2018

#### Week 8

Hi! Now that my exams are over, I am back on track. The first evaluation will take place this week. I will continue my work from the last task, i.e., wiki_login.m, completing its implementation and then using it in the publish_to_wiki function that will help upload published output to the user’s online wiki. After that I plan to implement the RESTful services part of the project, which has the weboptions function lined up first.

I’ll update this blog with any subsequent progress.

### Sudeepam Pandey

#### GSoC project progress: part two

In my previous post, I talked about all the major discussions held with the community, what the suggestion feature would be like, how I plan to realize this feature, and how I have extended the functionality of the scripts/help/__unimplemented__.m function file to integrate the command line suggestion feature with Octave. In this post, I would like to share my progress and talk about how the current implementation of the suggestion feature works. The link to the public repository that contains the code for this feature can be found here.

The goal for the first evaluations was to code up a small model that would show how this feature integrates itself with Octave. That part, however, was completed by the time I made my last blog post. I have been working on a full-fledged command line suggestion feature since then, and by now I have completed a working command suggestion feature that supports identifiers from core Octave and 40 Octave Forge packages. Let's start looking at the various parts of the feature.

Whenever the __unimplemented__.m function file fails to identify what the user entered as a valid, unimplemented Octave command, it calls one of my m-scripts, __suggestions__.m, and the command suggestion feature is triggered. This script, __suggestions__.m, does the following things...
• Firstly, based on the setting of the custom preference (set by the user with the command setpref ("Octave", "autosuggestion", true/false)), it decides whether or not to display any suggestions. If the preference is 'false', it realizes that the user has turned off the feature, and so it returns control without calculating or displaying any suggestions.
• However, if the preference is true, it checks whether what the user entered is at least a two-letter string. If not, it again returns control without calculating or displaying any suggestions. This is done because a one-letter string is unlikely to be a misspelled form of some command.
• However, if the string entered by the user is two letters or more, the script goes on to calculate the commands that closely match the misspelling. The calculation is done by a different script, and __suggestions__.m only calls that script to get the closest matching commands. These commands are then displayed to the user as potential corrections.
• If the misspelling is short (length of the misspelling < 5), the script entertains one typo only. However, if the length of the misspelling is greater than or equal to 5, two typos are entertained as well. This essentially means that for short misspellings, commands at an edit distance of 1 from the misspelling are shown as potential corrections, while for relatively longer misspellings, commands at an edit distance of 2 are also shown.
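The control flow in the list above can be summarized in a short sketch (Python for illustration; getpref and generate are stand-ins for the actual preference lookup and the __generate__.m script):

```python
def suggestions(misspelling, getpref, generate):
    """Sketch of the __suggestions__.m control flow described above.
    `generate` returns (command, edit_distance) pairs."""
    if not getpref("Octave", "autosuggestion"):
        return []                 # feature turned off by the user
    if len(misspelling) < 2:
        return []                 # one-letter typos: don't guess
    # Short misspellings tolerate one typo, longer ones two.
    max_dist = 1 if len(misspelling) < 5 else 2
    matches = generate(misspelling)
    return [cmd for cmd, dist in matches if dist <= max_dist]
```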
Commands that closely match the misspelling are calculated by a different m-script, __generate__.m. It loads a list of possible commands from a database file called func.db and then calculates the edit distance between the misspelling and each entry of the list using another script called edit_distance.m. Commands at an edit distance of one or two are accepted as close matches, and a list of all such commands along with their edit distances is returned to __suggestions__.m, which displays some or all of these suggestions depending on the logic described before.
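The distance itself is the standard Levenshtein dynamic program. A minimal sketch (Python for illustration; the project's version lives in edit_distance.m):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))        # distances from "" to b[:j]
    for i, ca in enumerate(a, start=1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]
```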

I'd like to mention that the strings package of Octave Forge also has a function file that calculates the edit distance, called "editdistance.m". Therefore, to avoid compatibility issues and to avoid having two different function files that do the same thing, I will later fold the edit_distance function that I wrote into the __generate__.m script.

### Improving the speed of the generation script

If we go on and calculate the edit distance between the misspelling and each and every identifier of octave (core+forge), our algorithm would take nearly 20 years to generate an output for each typographic error that the user makes. We, however, would like the time to be 20 milliseconds or so. For that, we use some smart techniques that reduce the sample space on which the algorithm has to operate.

To reduce the time, I've made a small assumption. I have assumed that the user never mistypes the first letter of a command. A rough analysis of the misspelling data that I received from Shane of octave-online before the commencement of the project, suggests that this is a reasonable assumption and would hardly reduce the accuracy of the suggestion feature. How good is this assumption for the speed? Well, I'd just say that, for a misspelling starting with the letter 'n', this small assumption reduces the sample size from 1492 to 36 (and that is not the best case!). The worst case was that of the letter 's' in which 178 out of 1492 commands were left. Even that corresponds to an 88% reduction in the sample size.

It is important to mention that doing this alphabetical separation at run-time would be a redundant task and a stupid idea that would correspond to the algorithm taking 20 years again.

Another thing that we should consider to improve the speed is to show suggestions from Octave core + loaded packages only. Obviously it is not a good idea to check among the commands that belong to a package which the user is not currently using (or worse, a package that is not installed on the user's machine).

Keeping these things in mind, I have created the func.db database file in such a way that the commands belonging to different packages are stored in different structures and are alphabetically separated as fields of that structure. So, for example, func.db contains a structure called control which holds the identifiers from the control package only, another structure core which holds the identifiers of core Octave only, another structure signal which holds the identifiers of the signal package only, and so on. The field a of the control structure (accessed by typing control.a) contains all the identifiers of the control package starting with 'a', the field b (accessed by typing control.b) contains those identifiers of the control package that start with 'b', and so on. This has been repeated for all the packages available.

To make our __generate__.m script memory efficient as well, we load the core structure (which is always required), check for the loaded packages, and load only the structures corresponding to those loaded packages. Then, using a switch case, we fetch all the commands which have the same first letter as the misspelling (in O(1), thanks to the way in which the database is arranged) and proceed to the next step.
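With made-up contents, the layout and the constant-time first-letter fetch look roughly like this (Python dicts standing in for the Octave structures in func.db; the database contents below are hypothetical):

```python
# Miniature stand-in for func.db: one structure per package,
# fields keyed by first letter.
func_db = {
    "core":    {"p": ["plot", "polar", "pie"], "s": ["sin", "sort"]},
    "control": {"a": ["augstate"], "s": ["sisotool", "ss"]},
    "signal":  {"f": ["fir1", "filtfilt"]},
}

def candidates(misspelling, loaded_packages):
    """Fetch only commands sharing the misspelling's first letter,
    from core plus the currently loaded packages."""
    first = misspelling[0]
    pool = []
    for pkg in ["core"] + list(loaded_packages):
        pool += func_db.get(pkg, {}).get(first, [])
    return pool
```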

To understand the next step, note that if a misspelling is of length p (say), and we are accepting corrections that are at an edit distance of one or two from the misspelling, then the corrections can only have the following lengths...
• p-2: Two deletes in the misspelling,
• p-1: One delete and one substitution, or one delete only.
• p: One delete and one addition, or one or two substitutions.
• p+1: One addition and one substitution, or one addition only.
• p+2: Two additions to the misspelling.
This fact allows us to reduce the list further and cuts out some 5-10 more entries for normal-length misspellings. This logic, however, is particularly useful for long misspellings, because very few commands have large lengths. If a user misspells the command "suppress_verbose_help_message", the script would take ages to suggest a correction without this logic; the edit distance algorithm is O(n1*n2) with dynamic programming, where n1 and n2 are the lengths of the strings being compared, and this O(n1*n2) work is repeated m times, where m is the number of possible commands that could be close matches. With this logic, however, the possible list is cut down to one or two commands only. Thus the value of m is reduced and the close matches are found within one or two iterations.
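In code, the length pruning from the cases above is a one-line filter (a sketch; the candidate names used in it are made up):

```python
def length_filter(misspelling, candidates):
    """Keep only candidates whose length is within +/-2 of the
    misspelling's length p, per the p-2 ... p+2 cases above."""
    p = len(misspelling)
    return [c for c in candidates if abs(len(c) - p) <= 2]
```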

That summarizes all the measures that I have taken to improve the speed of the suggestion feature. The control flow had been described before this and so that concludes the working of the suggestion feature.

### Conclusion

This concludes phase one. What's left is to include more forge packages and to include graphic properties within the scope of this feature. Writing the documentation, writing the tests, and debugging also remains but these shall be the tasks for subsequent coding phases. Till then, goodbye, see you in the next blog post. :)

## June 10, 2018

### Erivelton Gualter

#### First Evaluation - week 4

So, here is my last post before the first evaluation. If you have been following my blog or octave blog, you know that the purpose of this google summer of code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also, well-known …

### Sudeepam Pandey

#### GSoC project progress: part one


### An Initial note....

Alright, so first of all, I would like to apologize for not writing a proper blog post up till now. I had my final examinations during the first week of the coding period, and immediately after that, to catch up, I got so involved with the coding part that I forgot to share the progress of the project on the blog. On the positive side, however, I have completed a lot of work. I can safely say that I have completed the goals that were set for the phase 1 evaluations (possible style fixes may be left), but that’s not the entire good news. The phase two evaluation goal is also halfway done!

Now, I do realize that I have not shared any details of my project until now, and so, in this blog post, I’ll share a lot of important details and talk about everything that has been discussed and done so far. I promise to post more often after this, ‘cumulative’ post. Here goes...

### The Project Idea....

If you've read the last blog post, you'd know that I plan to add something called a 'Command Line suggestion feature' to Octave, and you may be wondering what that means. Basically, this feature would do something like this...

Whenever the users make a typographic error while working on Octave's command window, the command line suggestion feature would suggest a correction to them and say something like "The command you entered was not recognized, did you mean any of the following...?"

Now I could share a detailed time-line explaining 'when' I plan to do 'what' but I believe that not everyone would be interested in reading that and so I'll skip that for now. Instead, I'll quickly talk about the following...
• What the community wants the overall project to be like.
• What are the challenging parts of the project.
• What are my evaluation goals.
• What discussions have been made, and
• How much progress has been made.
If you really would like to see my time-line then just ask for it in the comments section and I'll share a link.

### The Community Bonding Period...

By the time you finish reading this section, probably the only thing left to talk about would be "How much progress has been made". That is just a glimpse of how much the community has been involved in this project. It also shows how successful GNU Octave is as an open source community, not every open-source community is as open when it comes to discussions.

Now the first thing to understand is that this project is essentially a UX improvement, and as such, Octave is not bound by 'MATLAB compatibility issues'. This is one of the primary reasons why there was so much to discuss in the community bonding period. Here are the main points that summarize the collective decision of the community on what the overall project should be like:
• First of all, it was decided that the user interface, or the part handling 'how this feature hooks itself to Octave', should be well separated from 'how the suggestions are generated'. This need became clear once we realized that there are a lot of algorithms available that could be used to generate suggestions. Separating the integration and generation parts means that if, in the future, a faster or more accurate algorithm to generate suggestions is discovered, replacing the existing implementation becomes easier.
• Secondly, a few problems, such as a very large output layer size and failure on dynamic package loading, were found with my proposed Neural Network based approach. Therefore, we decided to use a well-established approach, the edit distance algorithm, for now, and the Neural Network based approach will be the 'research part' of the project. Essentially, the plan is to first use 'smart implementations' of the good old edit distance algorithm to realize this feature, and then research whether a Neural Network could do better. If we later realize that a Neural Network (or, for that matter, any other approach) really could do better than the edit distance approach, the algorithm can be replaced very easily (thanks to the previous point).
• Next, we decided to include keywords, functions, and graphic properties within the scope of this feature. Very short keywords, user variables, and internal functions will not be included in its scope. Deprecated functions would also be included in the scope for now. Essentially, corrections would be suggested for typos close to anything that is within the scope of this feature and would not be suggested for anything that isn't.
• Also, we decided to use the missing_function_hook() to realize the integration part of this feature. More about this later in this post.
• Lastly, we decided that it is absolutely necessary to include an 'on/off switch' type of command that would let the users decide whether they want to use this feature or not. We plan to use custom preference for now to do this.
That summarizes the most important discussions that took place, and with that, we are in a position to talk about how the second point and the last point are directly related to the 'challenging parts of the project'. Let's start with that.

Essentially, the second point talks about the algorithm that will be used to generate the corrections that are ultimately shown to the user. The challenging part is that this algorithm should provide a minimal speed-accuracy trade-off. I did know about the edit distance algorithm beforehand, but I initially believed that a Neural Network would outperform it in terms of that trade-off. Discussing the idea with the community made me realize that there are some critical loopholes in the Neural Network based model, and although they could definitely be improved with more research, I should not jeopardize the entire project just to prove that Neural Networks could do better. We therefore decided to do what I described earlier in the second point.

At this point, defining a 'smart implementation' of edit distance remains. Basically, edit distance is a very accurate algorithm that quantifies how dissimilar two strings are. The only problem with it is its speed (my primary reason for initially proposing a trained Neural Network). Essentially, by a smart implementation of the algorithm, we mean an implementation which maximizes the computation speed by reducing the sample space on which the algorithm has to work. This would be done using some clever data management techniques and some probability-based assumptions. Some discussions related to these also took place during the community bonding period, and since then, I have been looking at the suggestion features of a lot of other free and open source software to devise some clever techniques. Good progress has been made, but I'll share that in another blog post.

The last point talks about a very important 'on/off' feature. The tricky part was that Octave comes in both a GUI and a CLI, so a common method that does the job could have been hard to find. However, this problem was solved with relative ease: we decided to use a custom preference to realize this part. This gave us a simple, common command to switch the feature on or off.

These discussions led me to reset my term evaluation goals, which are now as follows:
• Phase-1 goal: To code up and push an algorithm-independent version of the suggestion feature to my public repository. Essentially, this would show how the feature integrates itself with Octave.
• Phase-2 goal: A development version of Octave with a working (but maybe buggy and surely undocumented) command line suggestion feature integrated into it.
• Phase-3 goal: The primary goal would be to have a well documented, well tested and bug-free command line suggestion feature. The secondary goal would be to research and try to produce a Neural Network based correction generation script that outperforms the edit distance algorithm.
...and that, marked the end of the major discussions and the community bonding period.

So far, I have coded up the phase-1 goal. The public repository can be seen here. It very well shows how we have used the missing_function_hook() to integrate the feature with octave. The following points summarize the working:
• Essentially, whenever the parser fails to identify something as a valid Octave command, it calls the missing_function_hook(), which points to an internal function file, '__unimplemented__.m'.
• This file checks whether what the user entered is a valid, unimplemented Octave (core or Forge) command, or an implemented command that belongs to an unloaded Forge package. If yes, it returns an appropriate message to the user; if not, it does, or rather used to do, nothing.
• To realize the suggestion feature, I have extended its functionality to check for typographic errors whenever the entered command is not identified as a valid unimplemented/Forge command.
By using the missing_function_hook(), keywords and built-in functions were automatically brought into the scope of this feature. Graphic properties remain, because there is no missing_property_hook() in Octave right now. I have discussed this with the community and I'll try to code it up in the subsequent weeks.
Besides that, I have also figured out how the edit distance algorithm can be made 'smart'. I'll push an update and write another blog post as soon as I master and code up the entire thing. For now, thanks for reading, see you in the next post. :)

## June 03, 2018

### Erivelton Gualter

#### Getting closer to the First Evaluation - week 3

The First Evaluation period is around the corner. As proposed in the timeline of my first post, the work I have been doing is on time.

For this past week, I added some functionalities to the Root Locus Editor. This time, the user can add: real poles, complex poles …

## May 29, 2018

### Erivelton Gualter

#### Plots are Working - week 2

Here we go, one more week of code. This week I continued my work from the previous week on the interface of sisotool. Just reiterating what was done last week: I created a couple of GUIs to understand a little better how the UI Elements work in Octave. For this week …

#### Week 6 & 7

As I’ve already completed my first evaluation work and had it reviewed by my mentor, I won’t be doing much work in these two weeks due to my end semester exams. However, I’ll resume posting from week 8.

## May 21, 2018

### Erivelton Gualter

#### Code begins - week 1

The first week of coding has been completed.

As I mentioned in the last post, the goal of this previous week was to create a fixed layout to study the plot diagrams for the sisotool, as well as to add some UI Element functionality to control the interface. The following …

#### Week 5

With 25th May approaching, I’m confident in saying that I’ve completed my first evaluation (although some grammatical and styling rechecking is left). Now, as directed by Kai, I’ve implemented the wiki_login.m function using the internals of the url-transfer.cc/h class. I’ve scrapped the earlier design of implementing a cookie_manager.m script and a libcurl_wrapper.cc class for doing the cookie handling, and directly implemented __wiki_login__ as an internal function in urlwrite.cc. A user is now able to log into api.php.

To test my implementation:

• Make a test account on wiki.octave.space with username and password of your choice.

• hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave

• hg up ocs

• cd path/to/your/build/tree/ and execute make -jX

• ./run-octave

• execute wiki_login ("https://wiki.octave.space/api.php", yourUsername, yourPassword);

If it is all successful, you should see a prompt Success: logged in as yourUsername.

Other than that, I’m really grateful to jwe for moving the version.h file from libinterp to liboctave, as this is needed to get the OCTAVE_VERSION for the user agent (see week 3 for this). Earlier, it was in libinterp, and we must not use anything from libinterp in liboctave, as the latter should compile irrespective of the former.

NOTE: You may need to re-run ./bootstrap and ./configure due to above changes.

Lastly, I would request that someone who uses Windows (my Windows installation ran into some problem and I’m now unable to connect to the internet; I’ll reinstall it whenever I get some time) and/or Mac OS test my implementation and report any errors.

Needless to say I’m always looking forward to constructive feedback/criticism.

## May 16, 2018

#### Week 4

Continuing further, I added more options in libcurl_wrapper.cc. As described in earlier posts, the current implementation of wiki_login.m uses Java’s interface to Octave, and I need to replace it with Octave’s own implementation. So, I’ve taken two steps in this direction. Now a user is able to retrieve a token when executing wiki_login AND use the cookies that are set in a temporary .txt file to log into the api.php wiki. Currently, there’s a problem in logging in, because the following cookies are unable to get added: octave_org_session, octave_orgUserID, octave_orgUserName, octave_orgToken. I got to know about these cookies when I tried to execute the curl CLI commands for logging in.

I also understood how the HAVE_CURL macro encapsulates the curl_transfer class, i.e., if curl is available on a machine, then this class exists, else it doesn’t. HAVE_CURL is a macro that is set during the ./configure stage of building the software. I will mainly be extending my work in this class in the coming week.

I’ve also added the files in their appropriate directories.

A new dummy wiki has been created by Kai for testing purposes. I’ll be using this from now on.

## May 15, 2018

### Erivelton Gualter

#### Community Bonding Period

The community bonding period is over. The past 3 weeks were really busy because I was in my finals week, final projects and my doctoral research. However, I basically completed everything I wanted to before the “Coding officially begins!”:

• Finished Optimal Control and Intelligent Control System classes;
• Submitted a conference …

## May 08, 2018

#### Week 3

UPDATE on 12-May-2018: You can now directly test my work by cloning my repo, updating the source tree to my bookmark by hg up ocs, making a build and calling wiki_login from octave-cli to get a login token in return. I’ve added the files and necessary changes in the codebase itself. There’s no directory ocode now.

If you happen to already have a build of octave, just do the following:

• cd path/to/your/source/tree
• hg pull https://me_ydv_5@bitbucket.org/me_ydv_5/octave in your source tree.
• hg up ocs
• hg up -r 8bbf393
• make -jX in your build tree.

This will save you the time of cloning the entire repo and compiling it.

As mentioned in the previous post, I worked on __publish_wiki_output__.m and publish.m code. The __publish_wiki_output__.m has been added as an internal function in scripts/miscellaneous/private. I skimmed through the parser in publish.m to get a gist of how it actually works. It has three levels of parsing:

1. Extracting the overall structure (paragraphs and code sections).

2. Parsing the content of a paragraph.

3. Generating the output of the script code and handling the figures produced by the code.

After that, I studied how the url-transfer.h file is implemented. It contains a base class named base_url_transfer, which has a derived class named url_transfer. One thing that puzzled me while doing so was why there has to be a macro HAVE_CURL in order for curl_transfer to be defined, and why we haven’t guarded the url_transfer class itself the same way. I will try to get these doubts resolved this week.

The problem of user agent was solved by selecting the following user agent:

    GNU Octave/OCTAVE_VERSION (https://www.gnu.org/software/octave/ ; help@octave.org) libcurl/LIBCURL_VERSION


where OCTAVE_VERSION and LIBCURL_VERSION correspond to the user’s Octave and libcurl versions respectively. This code does precisely that for us.
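Constructing the string is then a simple substitution (a sketch; the version numbers used below are placeholders, not statements about actual releases):

```python
def octave_user_agent(octave_version, libcurl_version):
    """Build the User-Agent string in the format chosen above."""
    return ("GNU Octave/%s (https://www.gnu.org/software/octave/ ; "
            "help@octave.org) libcurl/%s"
            % (octave_version, libcurl_version))
```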

My intended plan for the wrapper is to make a cookie_manager.m file that will process the various user options (like verbose output, timeout settings, the api.php URL, etc.) and pass the values to an internal __curl__.cc function, which will in turn use libcurl_wrapper.cc to do the various tasks (essentially, all the work related to cookies will be looked after by it).

Currently, all the code in wiki_login.m has been commented out except the first step of login, i.e., getting a login token from api.php, which it does smoothly as of now. I am assuming that the file which stores the cookies is temporary and should be deleted once the session expires. This is one of the things I will be looking into this week.

I’ve recently migrated all the developments from my forked git repo to my mercurial bookmark ocs, and so I was not sure where I should put the files in my source tree. Thus, I’ve put all of them in a directory ocode for now.

To test this for yourself:

• Clone my build tree using hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave
• Make yourself a build of octave (make -j2, etc.).
• cd octave.
• Update to my bookmark using hg up ocs. (IMPORTANT!)
• cd ocode.
• Execute Makefile in octave.
• Execute wiki_login in octave to get a login token.

All other details of the wrapper’s implementation will follow in the next post. My next steps are:

• Choosing the right location for the files (after I get a green light for the current developmental path).
• Extending other options in the wrapper for wiki_login’s steps 2 and 3.
• Implementing cookie_manager with other user options.
• Writing help text and text cases, if any.
• Correction of existing work/changing the strategy as advised by my mentor, or anyone else.
• Look into how can I use existing base_url_transfer class in the wrapper and resolve my query of the HAVE_CURL macro and shared pointers, etc.

I am optimistic that I will be able to complete my first evaluation work by 25th May or so, as I will need to focus on my end term examinations after that, which start from 1 June. We don’t get holidays in between the exams!

Please let me know whether I am doing this the right way, either by replying to this thread or by simply dropping a message on #octave for <batterylow>. All suggestions are always welcome.

Oh, and not to forget, I got an SSL certificate for my domain; now all requests are served via HTTPS only!

Stay tuned for next update.

## May 01, 2018

#### Week 2

Week 1 included setting up the work environment, a bitbucket repository for tracking my project’s progress and review, and setting up and getting this blog aggregated to planet.octave.org. It also included reading various files that are of concern to the project.

Week 2 will focus on getting my hands dirty refactoring the __publish_wiki_output__.m and (possibly) publish.m code. This will also include looking up exactly what methods/functions will be needed to implement the wrapper. Currently, a proof of concept is written as a bash script. My work of writing the wrapper will be highly inspired by this script.

The wrapper is written so that MediaWiki can be communicated directly using Octave and there won’t be any need to use Java’s interface to Octave or the bash script itself. To know how exactly MediaWiki API works, have a look at this nicely written post. Note that the $wgEnableAPI written in the post is now deprecated from MediaWiki’s version 1.32.0. Another thing that needs looking upon is MediaWiki needs a user agent in order for the client to be identified. So we need to decide what would be it. Stay tuned for next update! ## April 26, 2018 ### Sudeepam Pandey #### Starting with GSoC 2018. So this year, I applied to the Google summer of code and got in. Google summer of code, or GSoC, as it is usually called, is a program funded by Google that has helped open source grow for over a decade. Under this program, Google awards stipends to university students for contributing code to open source organizations during their summer breaks from the university. The details of the program can be found here: Starting with Google summer of code. Now this year, I have been selected to work with GNU Octave. It is a free and open source software/ high level programming language which is primarily focused on scientific computing. It is largely compatible with MATLAB and is a brilliant open source alternative to it. More details about GNU Octave can be found at Free your numbers! Introducing GNU Octave. My GSoC project is about adding a Command line suggestion feature to GNU Octave. Stay tuned, I will share the details of the project very soon. ## April 25, 2018 ### Sahil Yadav #### Week 1 I’ve been selected as a Google Summer of Code , 2018 student developer at GNU Octave. GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. 
It may also be used as a batch-oriented language. A very heartfelt thanks to Kai T. Ohlhus, Doug Stewart, Ankit Raj and others who saw my potential and chose me for GSoC 2018. My project is Octave Code Sharing. The community bonding period will include reading up material that will be essential for the project.

## Abstract:

This project aims to come up with a pan-Octave implementation that can be used to connect to wiki.octave.org with appropriate credentials and publish Octave scripts, which can then be hosted for distribution, using the MediaWiki API. Currently there is no formal implementation, only a proof of concept that was implemented as a bash script and later refactored to use Java’s interface to Octave. Since the network connection itself has noticeable latency and response time, an Octave script handling the connection will not be much of a performance killer. Because HTTP is a stateless protocol, the information that needs to persist will be stored as cookies with the help of the libcurl library. All this leads to RESTful services for GNU Octave, which could later be extended for compatibility with MATLAB’s RESTful interface.

## Timeline:

1. April 23 - May 14: Community bonding period. Get to know more about libcurl and its implementation. Learn about HTTP request headers and the stateless transfer protocol. Finalise the location of the files in the codebase that will be needed for code sharing. Study the existing work done by the mentor, the publish() and grabcode() functions, and the octave::url_transfer class. Learn about the MediaWiki API and its implementation (backend workings).
2. May 15 - May 23: Implement the libcurl wrapper in libcurl_wrapper.cc/h. This includes abstractions such as ‘ALL’, ‘SESS’, ‘FLUSH’ and ‘RELOAD’ that Octave scripts can use to manage libcurl cookies when connecting to the server. There is a preliminary implementation written by the mentor some time back, which can be extended in this phase into a more general design.
3. May 23 - May 31: Add ‘wiki’ as an ‘output_format’ in the publish() function. This extends the private function __publish_wiki_output__.m, which formats published code as wiki markup.
4. June 1 - June 10: Non-coding period due to university major examinations.
5. June 11 - June 15: First phase evaluation. Write documentation of the work done up to this point and other tests, such as setting up a wiki installation environment for testing the script implemented in point 6. Catch up on any work that is lagging behind.
6. June 15 - June 30: Implement wiki_login.m using the libcurl wrapper from point 2. Currently this file uses Java’s interface to Octave; with the wrapper, it becomes an implementation of RESTful services for code sharing and no longer depends on Java’s interface to Octave.
7. July 1 - July 9: Implement the ‘weboptions’ function. It returns a default ‘weboptions’ object specifying the parameters for a request to a web service, built on the already-implemented libcurl wrapper. The current scope of the function is restricted to four options, viz. ‘Username’, ‘password’, ‘Keyname’, ‘KeyValue’.
8. July 10 - July 13: Second evaluation phase. Write documentation of the work done so far and the tests required for the ‘weboptions’ function. Catch up on any previous work that is left.
9. July 14 - July 20: Implement the ‘webread’ function. This reads content from a web service and returns the data formatted as text. Other output arguments (cmap, alpha) will not be supported initially.
10. July 21 - July 27: Implement the ‘webwrite’ function. This puts data in the body of an HTTP POST request to the web service.
11. July 28 - August 6: Buffer period for anything that remains. Complete documentation and testing for ‘webwrite’ and ‘webread’.

For now, libcurl_wrapper.cc/h will be placed in libinterp/corefcn, but this could change depending on whether it should be used by default or at the user’s choice, since someone might not want to connect to wiki.octave.org. I have merged two weeks, i.e. June 15 to June 30, because the main task there is the wrapper implementation for connecting to the web server, which may span up to two weeks.

### Additional things to be done after GSoC is over:

The web function is already partially implemented. The task will be to finish the implementation with various arguments such as ‘-notoolbar’, ‘-noaddressbox’ and ‘-new’ for MATLAB compatibility. ‘websave’, ‘ftp’ and ‘sendmail’, which are also part of the RESTful services, will be implemented as well. Any other part of GNU Octave that currently needs RESTful services using cookies can be amended to use the implementation resulting from this project. The ‘webread’ function will be extended to read values in JSON format, and images as well.

Thanks for reading, I will be posting the next update soon. Looking forward to a bug-free and code-some summer. :-)
## April 23, 2018

### Erivelton Gualter

#### Welcome to Octave and Google Summer of Code

This summer I got accepted to the Google Summer of Code under GNU Octave. This program, administered by Google, facilitates the entry of students into the open source community. My primary goal in participating in GSoC is to build a long-term relationship with the open source community …

## March 06, 2018

### Jordi Gutiérrez Hermoso

#### Advent of D

I wrote my Advent of Code in D. The programming language. It was the first time I used D in earnest every day for something substantial.
It was fun and I learned things along the way, such as easy metaprogramming, concurrency I could write correctly, and functional programming that doesn’t feel like I have one arm tied behind my back. I would do it all over again.

My main programming languages are C++ and Python. For me, D is the combination of the best of these two: the power of C++ with the ease of use of Python. Or to put it another way, D is the C++ I always wanted. This used to be D’s sales pitch, down to its name. There’s lots of evident C++ heritage in D. It is a C++ successor worthy of consideration.

# Why D?

This is the question people always ask me. Whenever I bring up D, I am faced with the following set of standard rebuttals:

• Why not Rust?
• D? That’s still around?
• D doesn’t bring anything new or interesting
• But the GC…

I’ll answer these briefly: D was easier for me to learn than Rust; yes, it’s still around and very lively; it has lots of interesting ideas; and what garbage collector? I guess there’s a GC, but I’ve never noticed it and it’s never gotten in my way. I will let D speak for itself further below.

For now, I would like to address the “why D?” rebuttals in a different way. It seems to me that people would rather not have to learn another new thing. Right now, Rust has a lot of attention and some of the code, and right now it seems like Rust may be the solution we always wanted for safe, systems-level coding. It takes effort to work on a new programming language. So, I think the “why D?” people are mostly saying, “why should I have to care about a different programming language; can’t I just immediately dismiss D and spend time learning Rust instead?”

I posit that no, you shouldn’t immediately dismiss D. If nothing else, try to listen to its ideas, many of which are distilled into Alexandrescu’s The D Programming Language.
I recommend this book as good reading material for computer science, even if you never plan to write any D (as a language reference itself it’s already dated in a number of ways, but I still recommend it for the ideas it discusses). Also browse the D Gems section in the D tour. In the meantime, let me show you what I learned about D while using it.

# Writing D every day for over 25 days

I took slightly longer than 25 days to write my Advent of Code solutions, partly because some stumped me a little and partly because around actual Christmas I wanted to spend time with family instead of writing code. When I was writing code, I would say that nearly every day of Advent of Code forced me to look into a new aspect of D. You can see my solutions in this Mercurial repository.

I am not going to go too much into the details of the abstract theory concerning the solution of each problem. Perhaps another time. I will instead focus on the specific D techniques I learned about or found most useful for each.

# Contents

## Day 1: parsing arguments, type conversions, template constraints

For Day 1, I was planning to be a bit more careful about everything around the code. I was going to carefully parse CLI arguments, produce docstrings and error messages when anything went wrong, and carefully validate template arguments with constraints (comparable to concepts in C++). While I could have done all of this, as the days went by I tried to golf my solutions, so I abandoned most of this boilerplate. Instead, I lazily relied on getting D stack traces at runtime or compiler errors when I messed up. As you can see from my solution, had I kept it up, the boilerplate isn’t too bad, though.

Template constraints are achieved by adding if(isNumeric!numType), which checks at compile time that my template was given a template argument of the correct type, where isNumeric comes from import std.traits.
I also found that getopt is a sufficiently mature standard library module for handling command-line parsing. It’s not quite as rich as Python’s argparse, merely sufficient. This about shows all it can do:

string input;
auto opts = getopt(
args,
"input|i", "Input captcha to process", &input
);
if (opts.helpWanted) {
defaultGetoptPrinter("Day 1 of AoC", opts.options);
}

Finally, a frequent workhorse that appeared from Day 1 was std.conv for parsing strings into numbers. A single function, to, is surprisingly versatile and does much more than that, by taking a single template argument for converting (not casting) one type into another. It knows not only how to parse strings into numbers and vice versa, but also how to convert numerical types keeping as much precision as possible, or how to read list or associative array literals from strings if they are in their standard string representation. It’s a good basic example of D’s power and flexibility in generic programming.

## Day 2: functional programming and uniform function call syntax

For whatever reason, probably because I was kind of trying to golf my solutions, I ended up writing a lot of functionalish code, with lots of map, reduce, filter, and so forth. This started early on with Day 2. D is mostly unopinionated about which style of programming one should use and offers tools for object orientation, functional programming, or just plain procedural programming, presenting no obstacle to mixing these styles. Lambdas are easily written inline with concise syntax, e.g. x => x*x, and the basic standard functional tools like map, reduce, filter and so on are available.

D’s approach to functional programming is quite pragmatic. While I rarely used it, because I wasn’t being too careful for these solutions, D functions can be labelled pure, which means that they can have no side effects. However, this still lets them do local impure things such as reassigning a variable or having a for loop.
The only restriction is that all of their impurity must be “on the stack”, and that they cannot call any impure functions themselves.

Another feature that I came to completely fall in love with is what they call uniform function call syntax (UFCS). With some caveats, this basically means that foo.bar(baz) is just sugar for bar(foo, baz). If the function only has one argument, the round brackets are optional and foo.bar is sugar for bar(foo). This very basic syntactic convenience makes it easy and pleasant to chain function calls together, making it more inviting to write functional code. It is also a happy unification of OOP and FP, because syntactically it’s the same to give an object a new member function as it is to create a free-standing function whose first argument is the object.

## Day 3: let’s try some complex arithmetic!

For me, 2-dimensional geometry is often very well described by complex numbers. The spiral in this problem seemed easy to describe as an associative array from complex coordinates to integer values. So, I decided to give D’s std.complex a try. It was easy to use and there were no big surprises here.

## Day 4: reusing familiar tools to find duplicates

There weren’t any new D techniques here, but it was nice to see how easy it was to build a simple word counter from D builtins. I was slightly disappointed that this data structure isn’t built in like Python’s own collections.Counter, but that’s hardly an insurmountable problem.

## Day 5: more practice with familiar tools

Again, not much new D here. I like the relative ease with which it’s possible to read integers into a list using map and std.conv.to.

## Day 6: ranges

There’s usually a fundamental paradigm or structure in a programming language on which everything else depends.
Haskell has functions and monads, C has pointers and arrays, C++ has classes and templates, Python has dicts and iterators, Javascript has callbacks and objects, Rust has borrowing and immutability. Ranges are one of D’s fundamental concepts. Roughly speaking, a range is anything that can be iterated over, like an array or a lazy generator. Thanks to D’s powerful metaprogramming, ranges can be defined to satisfy a kind of compile-time duck typing: if it has methods to check for emptiness, get the first element, and get the next element, then it’s an InputRange. This duck typing is reminiscent of type classes in Haskell. D’s general principle of having containers and algorithms on those containers is built upon the range concept. Ranges are intended to be a simpler reformulation of iterators from the C++ standard library.

I have been using ranges all along, as foreach loops are kind of like sugar for invoking those methods on ranges. However, for Day 6 I actually wanted to use an std.range function, enumerate. It simply iterates over a range while simultaneously producing a counter. This I used to write some brief code to obtain both the maximum of an array and the index at which it occurs.

Another range-related feature that appears for the first time here is slicing. Certain random-access ranges which allow integer indexing also allow slicing, and the typical way to remove elements from an array is to use it. For example, to remove the first five elements and the last two elements from an array:

arr = arr[5..$-2];

Here the dollar sign is sugar for arr.length and this removal is simply done by moving some start and end pointers in memory — no other bytes are touched.

The D Tour has a good taste of ranges and Programming in D goes into more depth.

## Day 7: structs and compile-time regexes

My solution for this problem was more complicated, and it forced me to break out an actual tree data structure. Because I wasn’t trying to be particularly parsimonious about memory usage or execution speed, I decided to create the tree by having a node struct with a global associative array indexing all of the nodes.

In D, structs have value semantics and classes have reference semantics. Roughly, this means that structs are on the stack, they get copied around when being passed into functions, while classes are always handled by reference instead and dynamically allocated and destroyed. Another difference between structs and classes is that classes have inheritance (and hence, polymorphic dispatch) but structs don’t. However, you can give structs methods, and they will have an implicit this parameter, although this is little more than sugar for free-standing functions.

Enough on OOP. Let’s talk about the really exciting stuff: compile-time regular expressions!

For this problem, there was some input parsing to do. Let’s look at what I wrote:

void parseLine(string line) {
static nodeRegex = regex(r"(?P<name>\w+) \((?P<weight>\d+)\)( -> (?P<children>[\w,]+))?");
auto row = matchFirst(line, nodeRegex);
// init the node struct here
}

The static keyword instructs D that this variable has to be computed at compile-time. D’s compiler basically has its own interpreter that can execute arbitrary code as long as all of the inputs are available at compile time. In this case, this parses and compiles this regex into the binary. The next line, where I call matchFirst on each line, is done at runtime, but if for whatever reason I had these strings available at compile time (say, defined as a big inline string a few lines above the same source file), I could also do the regex parsing at compile time if I wanted to.

This is really nice. This is one of my favourite D features. Add a static and you can precompute into your binary just about anything. You often don’t even need any extra syntax. If the compiler realises that it has all of the information at compile time to do something, it might just do it. This is known as compile-time function execution, hereafter, CTFE. The D Tour has a good overview of the topic.

## Day 8: more compile-time fun with mixin

Day 8 was another problem where the most interesting part was parsing. As before, I used a compile-time regex. But the interesting part of this problem was the following bit of code for parsing strings into their corresponding D comparison operation, as I originally wrote it:

auto comparisons = [
"<": function(int a, int b) => a < b,
">": function(int a, int b) => a > b,
"==": function(int a, int b) => a == b,
"<=": function(int a, int b) => a <= b,
">=": function(int a, int b) => a >= b,
"!=": function(int a, int b) => a != b,
];

Okay, this isn’t terrible. It’s just… not very pretty. I don’t like that it’s basically the same line repeated six times. I furthermore also don’t like that within each line, I have to repeat the operator in the string part and in the function body. Enter the mixin keyword! Basically, string mixins allow you to evaluate any string at compile time. They’re kind of like the C preprocessor, but much safer. For example, string mixins only evaluate complete expressions, so no shenanigans like #define private public are allowed. My first attempt to shorten the above looked like this:

bool function(int,int)[string] comparisons;
static foreach(cmp; ["<", ">", "==", "<=", ">=", "!="]) {
comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
}

Since I decided to use a compile-time static loop to populate my array, I now needed a separate declaration of the variable which forced me to spell out its ungainly type: an associative array that takes a string and returns a function with that signature. The mixin here takes a concatenated string that evaluates to a function.

However, this didn’t work for two reasons!

The first one is that static foreach was only introduced in September 2017, and the D compilers packaged in Debian didn’t have it yet when I wrote that code! The second problem is more subtle: initialisation of associative arrays currently cannot be done statically, because their internal data structures rely on runtime computations, according to my understanding of this discussion. They might fix it some day?

So, next best thing is my final answer:

bool function(int,int)[string] comparisons;

auto getComparisons(Args...)() {
foreach(cmp; Args) {
comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
}
return comparisons;
}

shared static this() {
comparisons = getComparisons!("<", ">", "==", "<=", ">=", "!=");
}

Alright, by size this is hardly shorter than the repetitive original. But I still think it’s better! It has no dull repetition where bugs are most often introduced, and it’s using a variable-argument templated function so that the mixin can have its values available at compile time. It uses the next best thing to compile-time initialisation, which is a module initialiser shared static this() that just calls the function to perform the init.

## Day 9: a switch statement!

Day 9 was a simpler parsing problem, so simple that instead of using a regex I decided to just use a switch statement. There isn’t anything terribly fancy about switch statements, and they work almost exactly the same as they do in other languages. The only distinct features of switch statements in D are that they work on numeric, string, or bool types and that they have deprecated implicit fallthrough. Fallthrough must instead be requested explicitly with goto case; (or will have to be, once the deprecation is complete).

Oh, and you can also specify ranges for a case statement, e.g.

case 'a': .. case 'z':
// do stuff with lowercase ASCII
break;

It’s the small conveniences that make this pleasant. Programming in D has a good discussion on switch statements.

## Day 10: learning what ranges cannot do

Superficially, you might think that since expressions like arr[2..$-2] are valid, the language would also allow things like arr[$-2..1] to traverse the array in reverse order, or some other syntax for a step size other than +1. At least I did. These kinds of array indexing are common in numerics-oriented languages such as Octave, Julia, R, or Python’s numpy. So for Day 10’s hash, which requires reversing an array, I thought I could just do that.

Turns out that the language doesn’t have syntax to allow this, but after a quick trip to the standard library I found the necessary function. What I thought could be written as

arr[a..b] = arr[b..a];

is instead

reverse(arr[a..b]);

Other than this minor discovery about ranges, Day 10 was more about getting the algorithm right than using any specialised D utilities. Since real hashes typically allow several sizes, I templated the hash functions with the total size, rounds of hashing, and chunk size, with a template constraint that the chunk size must divide the total size:

auto getHash(int Size=256, int Rounds=64, int ChunkSize=16)(string input)
if( Size % ChunkSize == 0)
{
// ...
}

Nothing new here. I just like that template constraints are so easy to write.

## Day 11: offline hex coding

I did most of Day 11 on paper. It took me a while to figure out a proper hex coordinate system and what the distance function in that coordinate system should be. I had seen hex coordinates from playing Battle for Wesnoth, but it took me a while to figure them out again. Once I had that, the actual D code was pretty simple and used no techniques I hadn’t seen before. I think this is the first time I used the cumulativeFold function, but other than that, nothing to see here. An immutable global associative array populated at module init time, like before,

pure static this(){
directions = [
"ne": [1,1],
"n": [0,1],
"nw": [-1,0],
"sw": [-1,-1],
"s": [0,-1],
"se": [1,0],
];
}

and that’s it.

## Day 12: for want of a set

The only new D technique for this problem was that I decided to use a set structure to keep track of which graph nodes had been visited. The only problem is that D doesn’t have a built-in set structure (yet?), but it does have a setDifference function. It’s a bit clunky. It only works on ordered arrays, but that was sufficient for my purpose here, and probably not much worse than hashing with a traditional set structure would have been.

One further observation: D has an in keyword, which can be used to test membership, like in Python (it also has an unrelated use for defining input and output arguments to functions), but unlike Python, only for associative arrays. This makes sense, because the complexity of testing for membership for other data structures can vary widely depending on the structure and the chosen algorithm, and there isn’t a clear universal choice like there is for associative arrays.

If desired, however, it’s possible to define the in operator for any other class, like so:

bool opBinaryRight(string op : "in")(T elt) {
// check that elt is in this
}

I would assume that’s what you could use to write a set class for D.

## Day 13: more offline coding

This one is another where I did most of the solution on paper and thus managed to write a very short program. No new D techniques here, just the usual functionalish style that I seem to be developing.

## Day 14: reusing older code as a module

The problem here is interesting because I’ve solved this labelling of connected components problem before in C++ for GNU Octave. I wrote the initial bwlabeln implementation using union-find. I was tempted to do the same here, but I couldn’t think of a quick way to do so, and talking to others in the #lobsters channel in IRC, I realised that a simpler recursive solution would work without overflowing the stack (because the problem is small enough, not because a stack-based algorithm is clever).

The interesting part is reusing an earlier solution, the hashing algorithm from Day 10. At first blush, this is quite simple: every D file also creates its own module, namespaced if desired by directories. It’s very reminiscent of Python’s import statement and module namespacing. The only snag is that my other file has a void main(string[] args) function and so does this one. The linker won’t like that duplicate definition of symbols. For this purpose, D offers conditional compilation, which in C and C++ is usually achieved via a familiar C preprocessor macro idiom.

In D, this idiom is codified into the language proper via the version keyword, like so:

version(standalone) {
void main(string[] args){
// do main things here
}
}

This instructs the compiler to compile the inside of the version block only if an option called “standalone” is passed in,

gdc -O2 -fversion=standalone app.d -o day10

or, with regrettably slightly different flags,

ldc2 -O2 -d-version=standalone app.d -of day10

There are other built-in arguments for version, such as “linux” or “OSX” to conditionally compile for a particular operating system. This keyword offers quite a bit of flexibility for conditional compilation, and it’s a big improvement over C preprocessor idioms.

## Day 15: generators, lambdas, functions, and delegates

This problem was an opportunity to test out a new function, generate, which takes a callable and lazily builds a range by calling it over and over. Haskell calls the closely related combinator iterate, which I think is a better name. Since it's a lazy generator, you need something like take to say how much of the generator you want to use. For example, the Haskell code

```haskell
pows = take 11 $ iterate (\x -> x*2) 1
```

can be translated into D as

```d
auto x = 1;
auto pows = generate!({ auto r = x; x *= 2; return r; }).take(11);
```

(generate's callable takes no arguments, so the doubling has to happen inside a stateful delegate rather than the unary lambda Haskell uses.) There are other examples in the documentation.

Let's also take a moment here to talk about the different anonymous functions in D. The following both declare a function that squares its input:

```d
function(int a) { return a^^2; }
delegate(int a) { return a^^2; }
```

The difference is just a question of closure. The delegate version carries a hidden pointer to its enclosing scope, so it can dynamically close over the outer scope variables. If you can't afford to pay this runtime penalty, the function version doesn't reference the enclosing scope (no extra pointer). So, for a generator, you typically want to use a delegate, since you want the generator to remember its scoped variables across successive calls, like what I did:

```d
auto generator(ulong val, ulong mult)
{
    return generate!(delegate() {
        val = (val * mult) % 2147483647;
        return val;
    });
}
```

This function returns a generator range where each entry will result in a new entry of this pseudorandom linear congruential generator.

The delegate or function keyword is part of the type, and can be omitted if it can be inferred from context (e.g. when passing a function into another function as an argument). Furthermore, there's a lambda shorthand that I have been using all along, where the { return foo; } boilerplate can be shortened to just => like so:

```d
(a) => a^^2
```

This form is only valid where there's enough context to infer whether it's a delegate or a function, as well as the type of a itself. More details in the language spec.

## Day 16: permutations with primitive tools

This permutations problem made me reach for the std.algorithm function bringToFront for cyclically permuting an array in place, like so:

```d
bringToFront(progs[rot..$], progs[0..rot]);
```

It’s a surprisingly versatile function that can be used to perform more tricks than cyclic permutations. Its documentation is worth a read.
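For instance, feeding it different slice pairs rotates a block or moves a single element, all without allocation (a small sketch, not from the puzzle):

```d
import std.algorithm : bringToFront;

void main()
{
    auto a = [0, 1, 2, 3, 4, 5];

    // Rotate left by two: the second slice is brought to the front.
    bringToFront(a[0 .. 2], a[2 .. $]);
    assert(a == [2, 3, 4, 5, 0, 1]);

    // Move a single element (here a[4]) to position 1, shifting the rest.
    bringToFront(a[1 .. 4], a[4 .. 5]);
    assert(a == [2, 0, 3, 4, 5, 1]);
}
```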

I also ran into a D bug here. I had to create a mutable character array from an immutable input string, but because of the special Unicode treatment D gives character types, I had to cast to ubyte[] instead of char[].
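The workaround looks roughly like this (a sketch; the actual day's code differs):

```d
import std.algorithm.mutation : swap;

void main()
{
    string input = "abcde";                // immutable(char)[]
    // char[] ranges auto-decode to dchar in Phobos, which gets in the
    // way of byte-level shuffling; a ubyte[] view avoids that.
    auto progs = cast(ubyte[]) input.dup;
    swap(progs[0], progs[4]);
    assert(cast(string) progs.idup == "ebcda");
}
```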

Besides that, for the second part, where you had to realise that the permutation's orbit cannot be too big, I also ended up using a string array with canFind from std.algorithm. I would have preferred a string set with hashing instead of linear searching, but it didn't make a huge difference at this problem size.
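The orbit check can be sketched like this (hypothetical names; the real step function would be the dance itself):

```d
import std.algorithm.searching : canFind;

// Iterate `step` from `start` until a state repeats; returns the number
// of distinct states seen, i.e. the orbit size (linear search via canFind).
size_t orbitSize(string start, string function(string) step)
{
    string[] seen;
    auto s = start;
    while (!seen.canFind(s))
    {
        seen ~= s;
        s = step(s);
    }
    return seen.length;
}
```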

I really want sets in the D standard library. Maybe I should see what I can do to make them happen.

## Day 17: avoiding all the work with a clever observation

This puzzle is a variation of the Josephus problem. I needed some help from #lobsters in IRC to figure out how to solve it. There aren’t any new D techniques, just some dumb array concatenation with the tilde operator for inserting elements into an array:
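A generic illustration of inserting with the tilde operator (not the actual puzzle code):

```d
void main()
{
    int[] buf = [0, 1, 2, 3];
    size_t pos = 2;
    // `~` concatenates slices and single elements into a fresh array.
    buf = buf[0 .. pos] ~ 99 ~ buf[pos .. $];
    assert(buf == [0, 1, 99, 2, 3]);
}
```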

## June 20, 2017

### Michele Ginesi

#### Timetable: modification


According to my timetable (which you can find here), during this last week of June I should have worked on the input validation of betainc. Since a new bug related to this function has been found and, moreover, the current implementation doesn't accept the "lower" or "upper" tail (as MATLAB does), my mentor and I decided to use this week to start studying how to rewrite betainc (the main references will be [1] and [2]) and to use the last part of the GSoC to actually implement it. This way my timetable remains almost identical (I will use July to work on Bessel functions) and I will also be able to fix this problem.

[1] Abramowitz, Stegun, "Handbook of Mathematical Functions"
[2] Cuyt, Brevik Petersen, Verdonk, Waadeland, "Handbook of Continued Fractions for Special Functions"