Since I first used MATLAB I have dreamt of finding a replacement for it. Not only is it expensive, proprietary software, it is also a terrible programming language. Don’t get me wrong, I’m sure it was amazing when it was invented, but that was in the 70s. We know better now. I’ve had to deal with so many fascinating bugs caused by its poor design decisions!
Most recently, I had code that was failing because 'asdf' and "asdf" are very similar, but not exactly the same. The former is a character vector, and the latter is a string. Almost always you can use them interchangeably, but as it turns out, not always. Another insane design decision is that you don’t need to declare variables to work on them. I declared a matrix called constraints, worked on it a bit, and then made an assignment with a typo: contraints(:,1) = v. Instead of throwing an error like any sane programming language, MATLAB just silently created a new variable contraints.
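In case you think I’m exaggerating, here is a little script showing both issues (the variable names and sizes are mine, the behaviour is MATLAB’s):

```matlab
% Character vector vs. string: usually interchangeable, but not quite.
c = 'asdf';            % character vector, a 1x4 char array
s = "asdf";            % string scalar, a single element
c == c                 % elementwise comparison: [1 1 1 1]
c == s                 % whole-value comparison: a single logical 1
numel(c)               % 4, one element per character
numel(s)               % 1, the whole string is one element

% Undeclared variables: a typo silently becomes a brand new variable.
v = [1; 2; 3];
constraints = zeros(3, 2);
contraints(:,1) = v;   % note the typo: no error, "contraints" now exists
constraints            % still all zeros; the assignment went somewhere else
```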
Perhaps more seriously, MATLAB does not support namespaces. If you are using two packages that both define a function called square, you have to be careful about the order in which they appear in the MATLAB path to get the correct one. And if you need both versions? You’re just out of luck.
Perhaps I should stop ranting at this point, but I just can’t. Another thing that drives me mad is that loop indices are always global, so you must be very careful about reusing index names. This interacts beautifully with another “feature” of MATLAB: i is both the imaginary unit and a valid variable name. If you have for i=1:3, i, end followed by a = 2+3*i, you’re not getting a complex number, you’re getting 11. The parser is downright stone age: it can’t handle simple operators like +=, or double indexing like a(2)(4). To vectorize a matrix there’s no function, just the operator :, so if you want to vectorize a(2) you have to either call reshape(a(2),[],1), or define x = a(2) and then do x(:). Which of course leads to everyone and their dog defining a function vec() for convenience, and these then all conflict with each other because of the lack of namespaces.
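Here is the whole mess in a throwaway script (the matrix and names are just placeholders):

```matlab
% The loop index survives after the loop, shadowing the imaginary unit i.
for i = 1:3, i, end    % prints 1, 2, 3; afterwards i is still 3
a = 2 + 3*i            % 11, not the complex number 2 + 3i
clear i
a = 2 + 3*i            % 2.0000 + 3.0000i; writing the literal 2 + 3i is safer still

% Indexing cannot be chained, so (:) cannot be applied to an expression.
m = magic(4);
% m(:,2)(:)            % error: ()-indexing must appear last in an index expression
x = m(:,2); x(:)       % workaround 1: a temporary variable, then (:)
reshape(m(:,2), [], 1) % workaround 2: reshape in one line
```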
I wouldn’t be surprised to find out that the person who designed the function length() tortured little animals as a child. If you call it on a vector, it works as expected. But if you call it on an $m \times n$ matrix, what should it do? I think the most sensible option is to give the number of elements, $mn$, but it’s also defensible to give $m$ or $n$. MATLAB of course takes the fourth option, $\max(m,n)$.
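Don’t take my word for it; try it on any matrix you like:

```matlab
A = ones(3, 5);   % an m x n matrix with m = 3, n = 5
length(A)         % 5, i.e. max(size(A)); neither m, n, nor m*n
size(A)           % 3 5, what you actually wanted to know
numel(A)          % 15, the number of elements
v = ones(4, 1);
length(v)         % 4, as expected for a vector
```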
I could also mention the lack of support for types, the Kafkaesque support for optional function arguments, the mixing of row and column vectors… It would keep me ranting forever. But enough about MATLAB. What are the alternatives?

The first one I looked at is Octave. It is open source, great, but its fundamental goal is to be compatible with MATLAB, so it cannot fix MATLAB’s uncountable design flaws. Furthermore, it isn’t 100% compatible with MATLAB either, so almost every time I have to use MATLAB because of some library, that library doesn’t work with Octave. If I give up on compatibility, I can use the Octave extensions that make the language more tolerable. But it’s still a terrible programming language, and it’s even slower than MATLAB, so there isn’t much point.
Then came Python. No hope of compatibility here, but I accepted that, no pain no gain. The language is a joy to program in, but I absolutely need some optimization libraries (which I’m not going to write myself). There are two available, CVXPY and PICOS. Back when I first looked at them, about a decade ago, neither supported complex numbers, so Python was immediately discarded. In the meantime they have both added support, so a few years ago I gave it a shot. It turns out both are unbearably slow. CVXPY gets an extra negative point for demanding its own versions of the partial trace and partial transposition, but that’s beside the point, as I can’t use them for any serious problem anyway. I did end up publishing a paper using Python code, but only because the optimization problem I was solving was so simple that performance wasn’t an issue.
After that I gave up for several years, resigned to my fate of programming in MATLAB until it drove me to suicide. But then Sébastien Designolle came to visit Vienna, and told me of a programming language called Julia that was even nicer to program in than Python, almost as fast as C++, and had an optimization library, JuMP, that supports every solver under the Sun. I couldn’t believe my ears. Had the promised land been there all along? After all, I knew of Julia; it had just never occurred to me that I could do optimization with it.
I immediately asked Sébastien if it supported complex numbers, and if it needed funny business to accept partial transpositions. Yes, and no, respectively. Amazing! To my relief JuMP had just added support for complex numbers, so I hadn’t suffered all these years for nothing. I started testing the support for complex numbers, and it turned out to be rather buggy. However, the developers Benoît Legat and Oscar Dowson fixed the bugs as fast as I could report them, so now it’s rock solid. Dowson in particular seemed to never sleep, but as it turned out he just lives in New Zealand.
Since then I have been learning Julia and writing serious code with it, and I can confirm: the language is all that Sébastien promised and more. Another big advantage is the extensive package ecosystem, where apparently half of academia has been busy solving the problems I need. Packages can be installed easily from within Julia itself, with proper support for versions and dependencies. Also worth mentioning is the powerful type system, which makes it easy to write functions that work differently for different types, and to switch at runtime between floats and complex floats and double floats and quadruple floats and arbitrary-precision floats. This makes it easy to do optimization with arbitrary precision, which JuMP in fact allows for the solvers that support it (as far as I know they are Hypatia, COSMO, and Clarabel). As you might know, this is a nightmare in MATLAB.
Now, Julia is not perfect. It has some design flaws. Some exist because it wants to feel familiar to MATLAB users, such as 1-indexed arrays and column-major storage. Some are incomprehensible (why is Real not a subtype of Complex? Why is M[:,1] a copy instead of a view?). It’s not an ideal language, it’s merely the best that exists. Maybe in a couple of decades someone will release a 0-indexed version called Giulia and we’ll finally have flying cars and world peace.
It’s a bit ironic to write this blog post just after releasing a paper that is based on Moment, a major MATLAB library that I wrote together with Andy. In my defence, Andy wrote almost all of the code, the vast majority of it is in C++, and we started it before Sébastien’s visit. And it demonstrated beyond any doubt that MATLAB is completely unsuitable for any serious programming. I promise that when I have time (haha) I’ll rewrite it in Julia.
But the time for irony is over. My new projects are all in Julia, and I’ll start releasing them very soon. In the meantime, I have written a tutorial to help refugees from MATLAB settle in the promised land.
EDIT: Because of Moment I had to program in MATLAB again, 7 months after this post. A nightmare. It took me a bloody week to find a bug in a large program. As it turns out, MATLAB gets very creative when you divide an integer by a float. I think every other programming language in the world gives you a float as the answer. MATLAB gives an integer. Which integer, though? Not the floor or the ceiling of the division, no, that would be too simple and predictable. MATLAB rounds to the nearest integer. The result of these two fascinating design decisions is that floor(5/2) is 3 when 5 is encoded as an integer. I can hear the madman who designed this cackling hysterically alone in the night, thinking of the curse he laid down for generations of programmers.
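For the record, here is the whole disaster in miniature:

```matlab
x = int32(5);
x / 2             % int32(3): integer arithmetic rounds 2.5 to the nearest integer
floor(x / 2)      % 3, because the damage is done before floor ever sees it
class(x / 2)      % 'int32': an integer divided by a double stays an integer
floor(5 / 2)      % 2, the answer you get when 5 is an ordinary double
```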