Saving this for the next “Year of desktop Linux” discussion 😄
Glad you recovered it.
IMO Julia just had way too many big issues to gain critical mass:
Copied 1-based indexing from MATLAB. Why? We’ve known that’s the worst option for decades.
For ages it had extremely slow startup times, I think because it JIT-compiles everything on first use; even cached it would take something like 20s just to load the plotting library. You can start MATLAB several times in that time. I believe they improved this fairly recently, but they clearly got the runtime/compile-time balance completely wrong for a research language.
There’s an article somewhere from someone who was really on board with Julia about all the issues that made them leave.
I still feel like there’s space for a MATLAB replacement… Hopefully someone will give it a better attempt at some point.
Anything that helps scientists and engineers move away from MATLAB is welcome.
The MATLAB language may be pretty bad but IMO that’s not what makes MATLAB good. Rather it’s:
Every signal processing / maths function is available and well documented. I don’t know how well Julia does on this but I know I wouldn’t want to use Python for the kinds of things I used MATLAB for (medical imaging). You don’t have to faff with pip to get a hilbert transform or whatever…
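For comparison, the Python route for that example would be `scipy.signal.hilbert`, which does mean a pip dependency. Just to illustrate what it computes, here’s a stdlib-only, O(N²) sketch of the analytic signal (a toy; real code would use scipy’s FFT-based version):

```python
import cmath
import math

def analytic_signal(x):
    """Naive DFT-based analytic signal (what scipy.signal.hilbert returns).

    O(N^2) and stdlib-only; purely illustrative.
    """
    n = len(x)
    # Forward DFT
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # Hilbert filter: keep DC (and Nyquist), double positive
    # frequencies, zero negative frequencies
    h = [0.0] * n
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        for k in range(1, n // 2):
            h[k] = 2.0
    else:
        for k in range(1, (n + 1) // 2):
            h[k] = 2.0
    # Inverse DFT
    return [sum(h[k] * X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

# The imaginary part of the analytic signal of cos is sin (90 degree shift)
n = 64
sig = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
z = analytic_signal(sig)
```

The real part of the result reproduces the input and the imaginary part is its Hilbert transform, which is exactly what you lean on for envelope detection in medical imaging.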
The plotting functionality is top notch. You can easily plot millions of points and it’s fast and responsive. Loads of plotting options. I just haven’t found anything that comes close. Every other option I’ve tried (a lot) only works for small datasets.
I am indeed thinking of CPython because a) approximately nobody uses PyPy, and b) this article is about CPython!!
In any case, PyPy is only about 4x faster than CPython on average (according to their own benchmarks), so it’s only going to be able to compete with C++ in specific circumstances, not in general.
And PyPy still has a GIL! Come on dude, think!
Yeah exactly. You made it faster through algorithmic improvement. Like for like Python is far far slower than C++ and it’s impossible to write Python that is as fast as C++.
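To make that concrete with a made-up example: the same lookup task in the same language can differ by orders of magnitude purely from the algorithm or data structure, which is usually where “Python beat C++” stories come from:

```python
import time

data = list(range(10_000))
targets = list(range(5_000, 15_000))

# O(N*M): scan the whole list for every lookup
t0 = time.perf_counter()
hits_slow = sum(1 for t in targets if t in data)
t_slow = time.perf_counter() - t0

# O(N+M): one pass to build a set, then O(1) membership tests
t0 = time.perf_counter()
lookup = set(data)
hits_fast = sum(1 for t in targets if t in lookup)
t_fast = time.perf_counter() - t0

print(hits_slow, hits_fast, f"{t_slow / t_fast:.0f}x faster")
```

Same answer, wildly different cost; no language change required.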
Sure but that’s not relevant to the current discussion. The point is that removing the GIL doesn’t affect Numpy because Numpy is written in C.
Pip and venv have been tools that I’ve found to greatly accelerate dev setup and application deployment.
I’m not saying pip and venv are worse than not using them. They’re obviously mandatory for Python development. I mean that compared to other languages they provide a pretty awful experience and you’ll constantly be fighting them. Here’s some examples:
- Pip is so slow that someone rewrote it as uv, which is written in Rust and consequently is about 10x faster (57s to 7s in my case).
- `pip install --config-settings editable-mode=compat --editable ./mypackage`. How user friendly. Apparently when they changed how editable packages were installed they were warned that it would break all static tooling, but they did it anyway. Good job guys.

There’s so much more, but this is just what I can remember off the top of my head. If you haven’t run into these things, just be glad your Python usage is simple enough that you’ve been lucky!
I’m actually in the process of making such a push where I’m at, for the first time in my career
Good luck!
Python is written in C too, what’s your point?
The point is that eliminating the GIL mainly benefits pure Python code. Numpy is already multithreaded.
I think you may have forgotten what we’re talking about.
the new python version was less than 50 lines and was developed in an afternoon, the c++ version was closing in on 1000 lines over 6 files.
That’s a bit suss too tbh. Did the C++ version use an existing library like Eigen too or did they implement everything from scratch?
The only interpreted language that can compete with compiled for execution speed is Java
“Interpreted” isn’t especially well defined but it would take a pretty wildly out-there definition to call Java interpreted! Java is JIT compiled or even AoT compiled recently.
it can be blazingly fast
It definitely can’t.
It would still be blown out of the water by similarly optimized compiled code
Well, yes. So not blazingly fast then.
I mean it can be blazingly fast compared to computers from the 90s, or like humans… But “blazingly fast” generally means in the context of what is possible.
Port component to compiled language
My extensive experience is that this step rarely happens because by the time it makes sense to do this you have 100k lines of Python and performance is juuuust about tolerable and we can’t wait 3 months for you to rewrite it we need those new features now now now!
My experience has also shown that writing Python is rarely a faster way to develop even prototypes, especially when you consider all the time you’ll waste on pip and setuptools and venv…
numpy
Numpy is written in C.
Numba
Numba is interesting… But a) it can already do multithreading so this change makes little difference, and b) it’s still not going to be as fast as C++ (obviously we don’t count the GPU backend).
Unless the C++ code was doing something wrong there’s literally no way you can write pure Python that’s 10x faster than it. Something else is going on there. Maybe the C++ code was accidentally O(N^2) or something.
In general Python will be 10-200 times slower than C++. 50x slower is typical.
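You can get a feel for the interpreter overhead without even leaving Python by comparing a hand-written loop against the same reduction done by the C-implemented builtin (rough, machine-dependent numbers):

```python
import time

nums = list(range(1_000_000))

# Interpreted: every iteration goes through the bytecode eval loop
t0 = time.perf_counter()
total_loop = 0
for x in nums:
    total_loop += x
t_loop = time.perf_counter() - t0

# The same reduction done in C by the builtin, one call into the runtime
t0 = time.perf_counter()
total_builtin = sum(nums)
t_builtin = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  builtin: {t_builtin:.3f}s  "
      f"ratio: {t_loop / t_builtin:.1f}x")
```

The ratio you see is a crude proxy for the per-operation cost of the eval loop, which is the same overhead C++ never pays.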
threading bugs are sometimes hard to catch
Putting it mildly! Threading bugs are probably the worst class of bugs to debug.
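A made-up illustration of why: a lost-update race only corrupts state on unlucky interleavings, so it can pass tests for months, whereas the locked version is deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Read-modify-write with no lock: two threads can read the same value
    # and one update gets lost, but only on unlucky interleavings.
    global counter
    for _ in range(n):
        counter += 1  # not atomic at the language level

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(50_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment))  # may or may not be 200000
print("with lock:   ", run(safe_increment))    # always 200000
```

The racy version often "happens" to produce the right answer, which is exactly what makes this class of bug so miserable to reproduce.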
It’s definitely debatable whether this is worth the risk of impossible-to-debug bugs. Python is very slow, and multithreading isn’t going to change that. 4x extremely slow is still extremely slow. If you care remotely about performance you need to use a different language anyway.
Meanwhile current AI is pretty much useless for any purpose where you actually need to rely on a decent chance to get quality results without human review.
Sure but there are tons of applications where you can tolerate lower than human levels of performance.
The amount of time ChatGPT has saved me programming is crazy, even though it struggles with more complex or niche tasks.
Here’s what I used it for most recently:
Write an HTML page that consists of a tree of <details> elements with interspersed text. These are log files with expandable sections. The sections can be nested.
The difficult part is I want the text content that is stored in the HTML file to be compressed with zlib and base64 encoded. It should be decompressed and inserted into the DOM once when each DOM node first becomes visible.
Be terse. Write high quality code with jsdoc type annotations.
It wrote a couple of hundred lines of code that wasn’t perfect but took 5 minutes to fix. Probably saved me an hour of writing it from scratch (I’m not a web dev so I’d have had to look things up).
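The generator side of that scheme is a few lines of Python. A hedged sketch (the helper names are mine, and the in-browser half, base64-decode plus inflate on first visibility, isn’t shown):

```python
import base64
import zlib

def embed_section(title: str, text: str) -> str:
    """Compress a log section and base64-encode it for embedding in HTML.

    Hypothetical generator-side helper: the page's JS would decode and
    inflate the payload the first time the node becomes visible
    (e.g. via an IntersectionObserver).
    """
    payload = base64.b64encode(zlib.compress(text.encode("utf-8"))).decode("ascii")
    return (f"<details><summary>{title}</summary>"
            f'<pre data-z="{payload}"></pre></details>')

def extract_section(fragment: str) -> str:
    # Inverse of embed_section, just to check the roundtrip in Python.
    payload = fragment.split('data-z="')[1].split('"')[0]
    return zlib.decompress(base64.b64decode(payload)).decode("utf-8")

log = "2024-01-01 12:00:00 INFO step passed\n" * 500
fragment = embed_section("test run", log)
print(len(log), "->", len(fragment))
```

For repetitive log text the compressed, encoded payload ends up far smaller than the raw section, which is the whole point of the scheme.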
Modern AI (LLMs etc) is definitely a revolution. Anyone that has tried ChatGPT can tell that, just like the only people saying the iPhone was a fad were the ones that hadn’t used it.
The thing that is hyped around AI is companies just trying to shove it into everything, and say stuff uses AI when it is totally inappropriate. That doesn’t mean AI itself is nonsense though. The same thing happened with the iPhone (everything had an app even if it made no sense).
Sooo much inane naysaying in that Rust for Filesystems article. I’m glad there are people with the stamina to push through it.
Part of the problem, Ted Ts’o said, is that there is an effort to get “everyone to switch over to the religion” of Rust
I would say a bigger problem is that there are people that think Rust is some kind of religion with acolytes trying to convert people. Is it really that hard to distinguish genuine revolutions (iPhone, Rust, AI, reusable rockets, etc.) from hyped nonsense (Blockchain/web3, Metaverse, etc.)?
These things are very obvious IMO, especially if you actually try them!
I have yet to see one of these that gives any benefit over ncdu, which is amazing. I guess if you need to log the output this makes sense, but that’s pretty niche.
Haven’t tried Rye but I have used uv (which Rye uses to replace pip). Pip install time went down from 58s to 7s. Yes really. Python is fucking slow!
Neat, but I’d really like it to just handle memory properly without me having to tweak swap and OOM settings at all. Windows and Mac can do it. Why can’t Linux? I have 32GB of RAM and some more zswap and it still regularly runs out of RAM and hard resets. Meanwhile my 16GB Windows machine from 2012 literally never has problems.
I wonder why there’s such a big difference. I guess Windows doesn’t have over-commit which probably helps apps like browsers know when to kick tabs out of memory (the biggest offender on Linux for me is having lots of tabs open in Firefox), and Windows doesn’t ignore the existence of GUIs like Linux does so maybe it makes better decisions about which processes to move to swap… but it feels like there must be something more?
We use it for triaging test failure (running tens of thousands of tests for CPU design verification).
That use is acceptable because it is purely informational. In general you should avoid regexes at all costs. They’re difficult to read, and easy to get wrong. Generally they are a very big red flag.
Unfortunately they tend to get used where they shouldn’t due to lazy developers not parsing things properly.
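A small made-up example of that failure mode: extracting a value with a quick regex silently truncates anything the pattern author didn’t anticipate, while splitting on the first delimiter just works:

```python
import re

line = "url=https://example.com/search?q=1"

# Regex attempt: looks fine on simple key=value inputs, silently wrong here
m = re.match(r"(\w+)=(\w+)", line)
regex_value = m.group(2)  # 'https': truncated at the first non-word char

# Parsing properly: split once on the first delimiter
key, _, value = line.partition("=")
print(regex_value, "vs", value)
```

The regex version doesn’t fail loudly; it hands back a plausible-looking wrong answer, which is why it slips through review.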
Those days never existed. Even the first iPhones were like $500 and that was over a decade ago.
These prices are very high, but phones last a lot longer than they used to and are improving much more slowly. I just bought a Pixel 8 for £400, which (accounting for inflation) is about the same as we used to pay for the old Pixels and even Nexuses.
E.g. the Nexus 4 which was considered “mega cheap” was £279 for the 16GB model, which is £390 in today’s money.
They’re clearly going for price differentiation based on the model year, but you really don’t need the latest model to have an amazing phone any more.