I poked around on an old Indy at a vintage computer show a couple years back, and the main takeaway I had was, “holy crap the UI elements feel instantaneous.”
I know it’s been posted here many times about how computers have become perceptually slow, but that Indy after a couple minutes of poking around really drove the point home in a way that no numbers ever could.
Computers have gained a lot, for sure, but they’ve also lost a lot. I wonder if it’s even possible to make a modern computer fast in a way that feels fast again.
It certainly is. Most of the reason modern computers don't feel instantaneous is actually a trade-off: old computers were less adaptive to change.
In old GUIs (e.g. Windows 3.1), many things—file associations, program launchers, etc.—got loaded from disk into memory once, usually at GUI startup, and then the state of those things was maintained entirely in memory. Programs that updated the on-disk state either 1. also independently updated the in-memory state with a command sent to the relevant state-database keeper; or 2. required that you log out and back in to see changes.
Today, we don't have everything sitting around loaded into memory—but in exchange, we have soft-realtime canonicity, where the things you see in the GUI reflect the way things are, rather than a snapshot of the way things were plus (voluntary, possibly missable/skippable) updates. Install a program that has higher-than-default-binding file associations? The files in your file manager will update their icons and launch actions, without the program needing to do anything.
There are ways to eat your cake and have it too—to have on-disk canonicity and instant updates—but it requires a very finicky programming model†, so we haven't seen any GUI toolkit offer this, let alone one of the major OSes.
† Essentially, you'd need to turn your Desktop Environment into a monolithic CQRS/ES aggregate, where programs change the DE's state by sending it commands, which it reacts to by changing in-memory state (the in-memory aggregate), and then persists a log of the events resulting from those commands as the canonical on-disk state (with other aggregates fed from those to build OLAPable indices / domain-state snapshots for fast reload.) This gets you "Smalltalk windowing semantics, but on a Unix filesystem substrate rather than a VM memory-image substrate."
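A minimal sketch of that shape, in Python (all names here are hypothetical, for illustration only): commands go to a single aggregate, events get appended to an on-disk log as the canonical state, and the in-memory state is rebuilt by replaying the log on startup.

```python
import json
from dataclasses import dataclass, field

@dataclass
class FileAssociations:
    """In-memory aggregate: extension -> handler program."""
    handlers: dict = field(default_factory=dict)

    def apply(self, event: dict) -> None:
        # Events are facts; applying them is the only way state changes.
        if event["type"] == "AssociationSet":
            self.handlers[event["ext"]] = event["program"]
        elif event["type"] == "AssociationRemoved":
            self.handlers.pop(event["ext"], None)

class DesktopStore:
    """Toy CQRS/ES store: the on-disk event log is canonical."""
    def __init__(self, log_path: str):
        self.log_path = log_path
        self.state = FileAssociations()
        try:  # replay the log to rebuild in-memory state on startup
            with open(log_path) as f:
                for line in f:
                    self.state.apply(json.loads(line))
        except FileNotFoundError:
            pass

    def handle_command(self, command: dict) -> None:
        # Commands are turned into events, persisted first, then applied.
        if command["cmd"] == "set_association":
            event = {"type": "AssociationSet",
                     "ext": command["ext"], "program": command["program"]}
        else:
            raise ValueError(command["cmd"])
        with open(self.log_path, "a") as f:
            f.write(json.dumps(event) + "\n")
        self.state.apply(event)
```

Reads always hit the in-memory aggregate (instant), while any other process can reload the canonical state by replaying the same log—which is the "snapshot indices for fast reload" part you'd bolt on next.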
While you might be partly right, my experience tuning Windows machines leads me to believe you're missing the mark.
I'm going to say the three largest contributors to general desktop lag are:
Animations and intentional delays. It can't be overstated how much faster a machine feels when something like MenuShowDelay is decreased to 0, or the piles of animations are sped up.
Too many layers between the application's draw commands and the actual display. All this compositing, vsync, and minimal 2D acceleration creates a low-level, persistent lag. Disabling Aero on a Win7 machine does wonders for its responsiveness. Even then, pre-Vista much of the Win32 GDI/drawing API was basically implemented in hardware on the GPU. If you run an old 2D Win32 API benchmark, you will notice that modern machines don't tend to fare well in raw API call performance. Thirty seconds of poking around on YouTube should find you a bunch of comparisons like this: https://www.youtube.com/watch?time_continue=25&v=ay-gqx18UTM.... Keep in mind that even in 2020, pretty much every application on the machine still relies on GDI (same as Linux apps relying on Xlib).
Input+processing lag. USB is polling with a fairly slow poll interval rate (think a hundred or so ms). Combine that with the fact that the keystrokes/events then end up queued/scheduled through multiple subsystems before eventually finding their way to the correct window, which then has to be rescheduled itself so the application can retrieve and process them via GetMessage()/etc. Basically, this is more a function of modern software bloat, where all those layers of "correct" architecture add more overhead than the old-school path: take the PS/2 interrupt, post a message to the active window's queue, schedule the process managing that window's messages. (https://social.technet.microsoft.com/Forums/windows/en-US/b1...)
There are a number of other issues, but you can retune those three areas to some extent, and the results are a pretty noticeable improvement. Having someone at MS/etc go in and actually focus on fixing this might have a massive effect with little effort. But that doesn't appear to be in the cards, since they appear to be more interested in telemetry and personal assistants.
Sure. Mind you, my argument wasn't comparing Windows to these sorts of "lightning-fast" systems—(modern) Windows isn't even a contender. Nor is macOS, nor KDE or GNOME.
My points were under the mode of thought where you look at a modern system already aimed at being "fast because it's lightweight" (e.g. XFCE), and then you ask why it still feels laggy compared to BeOS/AmigaOS/etc.
Where such "lightweight" DEs do have perceivable latency, that latency mostly comes down to operations hitting the disk where these older systems didn't. (Also the input stack, yes, but that's not universal: modern hardware still has PS/2 ports, and modern OSes still access those with rather direct reads. Many gamers swear by PS/2 peripherals—though probably mostly for cargo-cult reasons.)
Some of the minimal-est Linux WMs, e.g. Fluxbox, run entirely from memory once started, and so are comparably fast—but need an explicit restart/reload to pick up any changes. Plus, the apps launched in the WM aren't designed with the same paradigm, so most of your experience there is still slow.
BTW: one of the larger contributors to desktop lag on Linux continues to be the lack of integration between the scheduler and the WM. The idea that demand paging is causing a lot of general latency doesn't ring true to me. Maybe on initial startup, but once the machine gets going, most OSes aren't doing actual disk I/O to satisfy random user interactions. Things like the Windows SendTo list, which is actually a bunch of links in a directory (unlike most of the rest of the shell, which tends to be registry-based), end up cached in RAM and don't actually result in disk I/O (and you should delete entries you don't use). You might argue that all the user->kernel crossings that build these lists on the fly are a problem, but frankly, unless you have incredibly long lists of file associations etc., a few hundred API calls don't contribute much to the overall lag.
On Linux (which always seems to trail on responsiveness metrics) a lightweight DE can help, but scheduler tuning makes an even bigger difference. Just switching the power profile to performance is far more noticeable on Linux than on most other OSes. If you're not aware of the work of Con Kolivas, you should read up on the history there. Everything is a lot better than it was 15 years ago, but a number of the core problems really haven't been solved.
To expand on your point, one of my favourite examples of how slow a lot of software has become: A few years back I tested load time for a default Ubuntu emacs install in a console against "booting" the Linux-hosted version of AROS (an AmigaOS reimplementation) with a customized startup-sequence (AmigaOS boot script) that started FrexxEd (an AmigaOS editor co-written by the author of Curl).
AROS goes through the full AmigaOS-style boot, registering devices, and everything.
It handily beat my Emacs startup.
Now, you can make Emacs start quicker, and FrexxEd is not by default as capable (though it's scriptable with a C-like scripting language), but it's no wonder people feel software has gotten slower, because so often the defaults assume we're prepared to wait.
E.g. one of my pet peeves with typical Emacs installations: Try misconfiguring your DNS and watch it hang until the DNS lookups fail.... It's not an inherent flaw in Emacs; you can certainly prevent it from happening, but so many systems have Emacs installations where you face a long wait. Normally of course this is not a big issue, but those setups still have a DNS lookup in the critical path for startup that adds yet one more little delay.
All of these things add up very quickly. In the cases where these things were an issue in older software, they tended either to need to be explicitly enabled, or to be hidden by concurrency.
I submitted some patches to AROS years ago, to implement scrollback buffers and cut-and-paste in the terminal, and one of the things it really brought back is how painstakingly concurrent AmigaOS made everything, all over the place—at the cost of throughput, but cutting apparent latency.
E.g. when you cut and paste from a console window on AmigaOS, data about the copied region gets passed to a daemon that will write it to the clips: device. It gets passed to a separate daemon because the clips: device the clipboards are stored on, like everything else in AmigaOS, can be "assigned" to another location. By default it is stored in T: (temporary), and by default T: points to a RAM disk. However, clips: or T: could very well have been reassigned to MyClipboardFloppy:, in which case copying would prompt you to insert the floppy labeled MyClipboardFloppy:, after which writing the copied section would take way too long. (The actual write to the floppy would happen in yet another task.)
So copying as a high level system service is handled in a separate task (thread).
Everywhere throughout the system everything that could ever potentially be slow, and on a 7.16MHz 68k machine with floppies and limited RAM that was a lot of things, would be done in separate tasks so that at least the user could just get on with other things in the meantime. "Other things" then as a consequence often meant "just keep using the current application" because the concurrency would mean a lot of these things would lazily happen in the background.
In a typical shell session, for example, just pressing a key will involve half a dozen tasks or so, gradually "cooking" the input from a raw keyboard event, to an event for a specific window, to an event for a specific console device attached to a window, to an event for a high-level "console handler" that handles complex events such as auto-complete, to the shell itself. It is inefficient in terms of throughput, but because the system will preempt high-priority tasks (including user-input-related ones) and react with low latency, offloading less latency-critical work to other tasks, the system feels fast.
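A toy model of that "cooking" pipeline (this is an illustration, not AmigaOS code): each stage is its own thread with its own inbox queue, doing one small transformation and forwarding the result, so no stage ever blocks the producer of raw events.

```python
import queue
import threading

def stage(transform, inbox, outbox):
    """Generic pipeline task: read, transform, forward."""
    while True:
        item = inbox.get()
        if item is None:          # shutdown sentinel, pass it along
            outbox.put(None)
            return
        outbox.put(transform(item))

# Queues between stages: raw HW event -> window event -> console event -> shell.
raw, per_window, console, shell = (queue.Queue() for _ in range(4))

stages = [
    (lambda e: {"window": "shell", "key": e}, raw, per_window),    # route raw key
    (lambda e: {**e, "char": chr(e["key"])}, per_window, console), # decode to char
    (lambda e: e["char"], console, shell),                         # deliver to shell
]
for s in stages:
    threading.Thread(target=stage, args=s, daemon=True).start()

for scancode in [104, 105]:       # fake raw keycodes for 'h', 'i'
    raw.put(scancode)
raw.put(None)

typed = ""
while (c := shell.get()) is not None:
    typed += c
```

More hand-offs than a monolithic design (worse throughput), but each stage finishes quickly and a scheduler that prioritizes the input-side tasks keeps perceived latency low—which is the AmigaOS trade described above.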
A key to this was that on a system that slow, a lot of the things that could be slow would regularly be slow for the developers too, and would get fixed so that you had to opt in to the slow behaviour. I doubt most people using Emacs are ever aware their installation is slowed down by DNS lookups on startup, for example, because most of the time it's fast enough that you won't really notice that one extra little papercut.
> Everywhere throughout the system everything that could ever potentially be slow, and on a 7.16MHz 68k machine with floppies and limited RAM that was a lot of things, would be done in separate tasks so that at least the user could just get on with other things in the meantime.
BeOS also has this design. Modern programming languages and platforms do make async and event-based programming a lot more intuitive, so we might well see this paradigm make a comeback.
I guess it also depends on how much of it gets forced upon developers.
The Windows NT family has had asynchronous and event-based support since the early days, being heavily multi-threaded, yet not many cared to use it properly.
To the point that Microsoft pushed for an async-only world with WinRT (UWP), and it got a mixed response. Now, with Project Reunion going forward, it remains to be seen how async will fare.
However, at least .NET, C++20, Rust and JS—the main Windows desktop stacks—now have full asynchronous support.
On Android, Google only had AsyncTask, with multiple caveats about how to use it properly; then came the RxJava fashion and Java executors, and now it seems Kotlin coroutines are the future, assuming a #KotlinFirst world, with C++20 on the NDK.
On the Apple side we have GCD still as the main workhorse.
As for the other OSes, I guess they are pretty much still the same as they always have been, so that leaves language runtimes for better async and event-based programming.
Win32 UI libs however were heavily optimized asynchronous systems based on callbacks from the OS, where you avoided storing bitmaps up-front or anything like that, except as optional caching.
This is what enabled fast drawing despite a very primitive drawing system (essentially, GDI would get you a pointer into a window of VRAM—the origin of various fun graphical bugs people remember from Windows, like dragged broken dialog boxes leaving a "trace").
WinNT, OTOH, has realized async support across the whole I/O stack, essentially finishing the never-finished concurrent QIO of Digital's VMS (VMS AFAIK to this day hasn't got fully working concurrent QIO—the API is there, but if you actually enable concurrent operation, too many applications die), and has a bunch of undocumented async mechanisms as well (including mostly free-form asynchronous calls from kernel to user—again a VMS invention—which are in many ways similar to POSIX signals, except you aren't limited to a small table of events to attach handlers to, and they support concurrency by default).
Event-driven is only comparable if you explicitly make the events go on a queue and allow preemption.
To get this right, you need to actually have the code executed with actual preemptive multitasking, and test it with random substantial delays in processing of messages.
I’ve been thinking of this quite a lot, and I beg everybody’s pardon if I now proceed to veer off topic, but...
I’ve been thinking about the landmark announcement by Apple that surprised absolutely nobody by informing its developers and the wider world that it will be switching to “Apple Silicon”, a yet-to-be-filled placeholder for a brand-name-to-be. (The only thing it almost guarantees is that officially there won’t be any reference to ‘ARM’.)
So, famously, Gil Amelio quipped that he had chosen to buy NeXT “rather than Plan Be” when, panicked by declining sales and hampered by an obsolete OS, he plonked down a sizeable chunk of an on-the-verge-of-bankruptcy Apple’s money to buy a working OS to succeed their decidedly musty old eighties tech—something they hadn’t been able to develop internally. NeXTStep had excellent developer tools, a close tie to graphic design (by way of its Display PostScript), and a rather Frankenstein-ish UNIX core, though I’d hesitate to call it a “beating heart”. The core OS, the kernel, and so forth were not the main strong point and were not great performers. It’s what stood above them that had value, and that has, apparently, driven and undergirded much of what Apple did from then until now.
And yet, just as they finally abandon the charade of Mac OS X being “Mac OS Ten” and move forward to “macOS 11”, I can’t help but think that the path they have traced for themselves will lead them to reimplement much of their core technology in a manner more akin to how BeOS’s “pervasive multithreading” was conceived almost thirty years ago. What do our devices provide us with now? Real-time media streams and lag-free interactivity whilst connected to a network and running on a fairly modest platform. When I first heard Be’s motto “One Processor Per Person Is Not Enough!” I was bewitched (to the point that my teenage self insisted obstinately that his next PC be a dual-processor machine, to experience the exotic thrill of it). But now it’s normal.
Apple is switching to its own silicon, which probably means larger grids of the cores it has already shown are so incredibly overpowered for the likes of the iPad Pro (so much so that what every iPad Pro owner has secretly suspected—that their machine could comfortably run macOS and a bunch of demanding applications—has been confirmed). I heard somewhere that the A12Z that powers the current-generation iPad Pros (and is now coincidentally pulling very honourable double duty as the SoC in the Developer Transition Kit) is a 4W part, and that the current MacBook Air is apparently designed for a 16W thermal capacity, so we can expect something along the lines of “four times as many of whatever will go in the A14” as a fair first-order approximation. So, as with AMD, we’re going to have a lot of cores.
They’re going to need the BeOS ethos to master all those cores. And it’ll be interesting to watch, because our collective success with GPUs and some embarrassingly-parallelisable non-graphics workloads has given us a collective false sense of having somehow ‘mastered’ parallelism. Ultimately, at root, our systems are built on mechanisms and assumptions that only scale favourably so far. Maybe it’s time to go back and have a good look at what those folks did in the early nineties, when they delivered realtime multimedia streams on interactive devices with an absolute minimum of lag, and did it all with what we’d now consider a pittance of resources.
> Animations and intentional delays. It can't be overstated how much faster a machine feels when something like MenuShowDelay is decreased to 0, or the piles of animations are sped up.
These animations effectively increase the input lag significantly. Even with them turned off there are extra frames of lag between a click and the updated widget fully rendering.
(Everything below refers to a 60 Hz display)
For example, opening a combo-box in Windows 10 with animations disabled takes two frames; the first frame draws just the shadow, the next frame the finished open box. With animations enabled, it seems to depend on the number of items, but generally around 15 frames. That's effectively a quarter second of extra input lag.
A menu fading in takes ~12 frames (0.2 seconds), but at least you can interact with it while partially faded in.
Animated windows? That'll be another 20 frame delay, a third of a second. Without animations you're down to six, again with some half-drawn weirdness where the empty window appears in one frame and is filled in the next. (So if you noticed pop-ups looking slightly weird in Windows, that's why).
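Converting those frame counts to milliseconds is a one-liner; one frame at 60 Hz is 1000/60 ≈ 16.7 ms, which is where the quarter-second and third-of-a-second figures above come from.

```python
def frames_to_ms(frames: int, refresh_hz: float = 60.0) -> float:
    """Display latency contributed by a given number of refresh frames."""
    return frames * 1000.0 / refresh_hz

print(frames_to_ms(2))   # combo-box without animations: ~33 ms
print(frames_to_ms(15))  # combo-box with animations: 250 ms, a quarter second
print(frames_to_ms(20))  # animated window: ~333 ms, a third of a second
```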
I assume these two-frame redraws are due to Windows Widgets / GDI and DWM not being synchronized at all, much like the broken redraws you can get on X11 with a compositor.
> USB is polling with a fairly slow poll interval rate (think a hundred or so ms).
The lowest polling rate typically used by HID input devices is 125 Hz (bInterval=8), while gaming hardware usually defaults to 500 or 1000 Hz (bInterval=2 or 1). Most input devices aren't that major a cause of input lag, although curiously a number of even new products implement debouncing incorrectly, which adds 5-10 ms; rather unfortunate.
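For full-speed USB interrupt endpoints, bInterval is the polling period in 1 ms frames, so the effective rate is simply 1000/bInterval Hz, and the worst-case added latency is one period:

```python
def fs_poll_rate_hz(b_interval: int) -> float:
    """Polling rate of a full-speed USB interrupt endpoint (bInterval in ms)."""
    return 1000.0 / b_interval

def worst_case_latency_ms(b_interval: int) -> float:
    """An event just after a poll waits at most one full period."""
    return float(b_interval)

print(fs_poll_rate_hz(8))        # 125.0 Hz, typical keyboard/mouse default
print(fs_poll_rate_hz(1))        # 1000.0 Hz, common gaming default
print(worst_case_latency_ms(8))  # 8.0 ms worst case at bInterval=8
```

(High-speed devices encode the interval differently—2^(bInterval-1) microframes of 125 µs—but basic HID input devices typically enumerate at full speed, matching the numbers above.)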
> For example, opening a combo-box in Windows 10 with animations disabled takes two frames; the first frame draws just the shadow, the next frame the finished open box. With animations enabled, it seems to depend on the number of items, but generally around 15 frames. That's effectively a quarter second of extra input lag.
This isn't usually what I think of when I think of "latency." Latency is, to me, the time between when the user inputs, and when the system recognizes the action.
This becomes especially problematic in situations where events get queued up, and then the extra latency causes an event to attach to something that is now in a different state than the user perceived it to be when they did the input—e.g. double-clicking on an item in a window you're closing right after telling the system to close the window, where you saw the window as open, but your event's processing was delayed until after the window finished closing, such that now you've "actually" clicked on something that was, at the time, behind the window.
On the other hand, the type of latency you're talking about—between when the system recognizes input, and when it finishes displaying output—seems much less troublesome to me.
We're not playing competitive FPS games here. Nobody's trying to read-and-click things as fast as possible, lest something horrible happen.
And even if they were, the "reading" part of reading-and-clicking needs to be considered. Can people read fast enough that shaving off a quarter-second of display time benefits them?
And, more crucially, does cutting that animation time actually cause users to be able to read the text faster? Naively you'd assume it would; but remember that users have to move their eyes to align with the text, to start reading it. If the animated version more quickly "snaps" the user's eyes to the text than the non-animated version, then in theory the user using the animated combo-box might actually be able to select an option faster!
(And remember, none of this matters for users who are acting on reflex; without the kind of recognition latency I mentioned above, the view controller for the combo-box will be instantly responsive to e.g. keyboard input, even while the combo-box's view is still animating into existence. Users who already know what they want, don't need to see the options in order to select them. In fact, such users won't usually bother to open the combo-box at all, instead just tabbing into it and then typing a text-prefix to select an option.)
The "latency caused incorrect handling" is IMO the worst thing in all the "modern desktop is slow" complaints.
I can deal with 9 seconds latency, if I can mentally precompute the expected path taken and the results match it - this happens just by being familiar with what you're doing, and can be compared to using Vi in edit mode with complex commands.
I can't deal with 400ms lag if the result is that a different action than the one I wanted is executed.
USB polls at 1000 Hz now if the device supports it; it's really not bad. On the hardware side, displays are the part that still needs some work in terms of latency, and FreeSync and G-Sync are evidence that the problems are at least being considered. The hardware in the PC itself is pretty good.
On the software side, operating systems, desktop environments, GUI SDKs, frameworks, and so on could all take the problem a lot more seriously, but I wouldn't hold my breath waiting for that. There are too many people involved who believe it's reasonable to pause and play a little animation before doing what the user asked for.
I don't know if this still works, but nvidia had a "program settings" override, where you could select a .exe and force vsync on for particular programs. Might try messing with that. I did that for media player classic a long time ago.
The thing is, you will always have tearing if you disable vsync (and compositing) for the sake of low latency. But tearing is only really perceivable when watching full-screen videos and animations, which is quite a different use case from general computer use.
Not entirely sure your analysis passes muster. While it is true that pigs like Chrome and Slack are the norm, computers still have many orders of magnitude more memory than 20 years ago, and even if a large GUI app keeps state on disk, most of it should be hot in some kind of cache, whether an explicit cache or just the OS page cache.
There are other forces at work.
For instance, on my primary workstation, after some time from cold boot, pretty much everything is sitting comfortably in 32 GB of memory. A lot of the lag is instead due to:
1) the sheer volume of crap that has to be shuttled through memory, because memory accesses still aren't free;
2) a surprising amount of repetitive, CPU-intensive bookkeeping—linking, conversion, parsing—just to get a program started up;
in addition to all the built-in latencies already mentioned.
To prove this, you can set up a system on a RAM disk and notice that responsiveness is often still subpar.
I strongly suspect there are issues related specifically in how not just the OS, but also various applications, handle input events.
A special example for me is Chrome, which has a huge singleton in the form of the Browser Process that can make your experience hell even if all the other Chrome processes are idling—simply because it's the central hub that handles, among other things, all of the input handling and UI.
This compounds with other latencies, where often I can build up patterns like "if a window containing that kind of content is open, expect macOS WindowServer to start lagging insanely despite being under low load".
Just yesterday I started a VM of beta 2 of HaikuOS for a good dose of nostalgia. Some observations: the responsiveness of the UI is instantaneous—so much so that it feels alien nowadays. Also, the UI does not waste space, but at the same time it does not feel cluttered. Laying out windows feels more organized than on Windows 10 or recent macOS; I can't really point out why. Lastly, having dedicated window borders is so useful for aiming the mouse pointer when resizing.
At some point our UIs became more about design than utility.
I worked with an Indy for some time and what was really cool - besides the smooth and fast reaction to user input you mentioned - was that the UI elements were scalable. If I remember correctly they were vector and not bitmaps. Back then I thought that this will soon be the norm. I'd never imagined that we would still be struggling with this more than 20 years later.
EDIT: After googling SGI workstation models I think what I used was actually most likely an O2. Great design, I remembered its distinct look even after so many years.
There was a revelation a while back that instantaneous UX can be detrimental. If the action occurs so quickly that the users can't see it happening, they have a tendency to assume it didn't happen. Programmers had to introduce intentional latency through elements such as the file copy animation.
I disagree. What you describe is a problem of interaction design founded on bad assumptions; with good interaction design I don't have to show the user that the computer is doing something for the user to be able to tell it happened. This is a problem of the system not showing its state transparently and relying on the user to notice a change in hidden state indicated by a transient window.
Windows Explorer gets your particular example right: When you copy a bunch of files into a folder, it will highlight all of the copied files after it is done, so it doesn't matter if you saw the progress bar or not.
Having seen all this from the inside of more offices than I can remember, I would say definitively yes. That programmers did have to introduce delays for that, and for other reasons.
However, this was a trust thing, and not an innate limitation of human capacity.
Now that we’re in 2020, I would argue that almost 0 users need that crutch.
If the work is done, it’s done. Similarly, employees on early keyboard-only terminals didn’t need any delay either. They trusted that when they said do X, the system did it. How else could you process dozens (or hundreds!) of items a minute?
That was indeed a while back—we had fewer ways to communicate change back then. Given all the fancy technologies in a modern laptop/mobile device, I wonder how much more we could tell the user without slowing down to give visual feedback—instead communicating by haptic and/or 3D-audio feedback after the fact (the same way that e.g. macOS plays a sound effect after-the-fact when you put something in the Trash—but with a procedurally-synthesized soundscape, rather than the static "it went to the bottom-right.")
Oooo. So I think this is a terrible idea for most people, but something I would personally love. I'm a big fan of 3 monitors in a slight "wrapping" formation, but there are times when some event occurs on a screen which is literally behind me if I'm looking at one of the far monitors. It would be so cool to have 3D audio feedback that lets you calibrate to the layout of your monitors.
Back in 1998/1999 I worked in a mom-and-pop ISP shop with 64 dialup lines. We had an Indy that ran our mail server, website, databases, FTP and web hosting, LDAP server, etc etc. And it was still somewhat usable as a desktop as well.
(straying slightly off-topic) I recently had the same experience with an Apple eMac. Grab a window, shake it around as fast as you can, or just move the mouse cursor around in circles quickly.
"Whoa, why does this seem so much more responsive than my modern machines?" Two things, I think: 1) the mouse cursor is being updated with every vertical refresh (80Hz), and 2) the latency of the CRT must be lower than an LCD.
CRTs are "dumb" devices: they literally just amplify the R/G/B analog signals while deflecting a beam with electromagnets according to timing signals. As far as input lag goes, they're the baseline. For fast motion they also have some advantages, at least over poor LCD screens, since non-strobing LCDs quite literally crossfade under a constant backlight between the current image and the new image; we perceive this crossfading as additional blurring. A strobing LCD, on the other hand, shifts the new image into the pixel array and lets the pixels transition while the backlight is turned off. The obvious problem: it flickers.
LCDs that aren't optimized for low latency will generally just buffer a full frame before displaying it, coupled with a slow panel these will typically have 25-35 ms of input lag at 60 Hz. LCDs meant for gaming offer something called "immediate mode" or similar, where the controller buffers just a few lines or so, which makes the processing delay irrelevant (<1 ms). The image is effectively streamed through the LCD controller directly into the pixel array.
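A rough back-of-the-envelope model of those numbers (the panel response figure is an illustrative assumption, not a measurement): buffering a whole frame adds one refresh period before the panel even begins its transition, while a line-buffered "immediate mode" controller adds almost nothing beyond scanout itself.

```python
def display_lag_ms(refresh_hz: float = 60.0,
                   panel_response_ms: float = 10.0,
                   frames_buffered: int = 1) -> float:
    """Approximate controller + panel latency for an LCD."""
    frame_ms = 1000.0 / refresh_hz           # one refresh period
    return frames_buffered * frame_ms + panel_response_ms

print(display_lag_ms())                      # full-frame buffer: ~26.7 ms
print(display_lag_ms(panel_response_ms=1.0,  # fast panel, line buffering:
                     frames_buffered=0))     # ~1 ms, processing is negligible
```

With a full-frame buffer and a slowish panel you land right in the quoted 25-35 ms range; drop the buffering and most of that disappears.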
There was a certain level of optimization, though arguably the graphics stack was much more complex than GDI, so yes, Windows 9x with a GDI accelerator would have a more performant interface for basic work.
Xsgi included some interesting optimizations, starting with the fact that it was a compositing X server, plus a feature that is known to break GTK3 all the time: Xsgi would provide a low-color visual as the first choice. This was reflected in the provided Motif and other libraries, and exploited by software developed for IRIX—the latter mostly because you could not depend on the end user having a 24-bit display.
This means that the bandwidth requirements across all stages of drawing were reduced for common UI components, instead of slinging around high-resolution 32-bit RGBA bitmaps for everything, as is the norm today on Linux. I haven't checked, but I strongly suspect that the more ascetic UI controls, combined with classic X11 drawing calls, also resulted in higher speed vs. slinging lots and lots of bitmaps with overdraw using XRender.
For more on that, consider checking the literature on avoiding overdraw on Android and how much of an impact it had on UI latency. iOS, BTW, actually did a lot of dirty tricks to essentially prevent developers from doing any overdraw without explicitly opting into it, and supposedly a strong part of the early review process was checking for overdraw—because the actual iPhone hardware was not that powerful, and they ran close to the bandwidth budget per frame.
Definitely felt like it to me; NT (4?) was sluggish. It was much better than the other Windows but at our office it was kind of a joke (no-one touched them unless they needed to compile to an .exe) compared to SGI / Sun machines.
I actually have two SGI machines kicking around (an Indigo2 and an Octane2), and I really wish I had a better idea of what to do with them beyond poking around at the desktop for 10 minutes.
One big problem with all these old "workstation" computers is that while we hobbyists still have ways of getting the OS installed, the actual applications people ran on them almost seem lost to time. When software was so unbelievably expensive in its heyday, it tends not to make the jump to "abandonware" repositories once its time has passed. This unfortunately makes demos of these old machines far more boring than demos of old PCs.
I think CATIA V4 (CAD software) was big on SGI - at least that's what I used an Indy for back in the day. I believe V4 didn't run on the PC, so you needed a workstation anyway. Don't know if one can find a copy anywhere but I think it would be fun to use it again. It was solid software with a good UI, quite different from CATIA V5 which (also) ran on the PC and had a very colorful and noisy UI.
EDIT: After googling SGI workstation models I think what I used was actually most likely an O2. Great design, I remembered its distinct look even after so many years.
Heh, I was doing CAD workstation support about 15 years ago, when there was the big switch between CATIA V4 and V5. SGIs were mostly Octanes plus a few bigger irons, I think, but it also ran on HP-UX and some Sun workstations.
There wasn't a lot of movement in the 3D workstation space at the time, whereas PC 3D accelerators were taking off in a big way. So you ended up with a system that was faster, a lot cheaper, and where regular PC maintenance software and infrastructure could be used (boy, heterogeneous Unix devops was a nightmare).
Having said that, CATIA was more a workhorse CAD and CAE software, so probably not the best to show off neat UIs, and amongst engineers it had a somewhat ponderous reputation.
I ran similar and built up some pretty big systems.
Some software, made on IRIX, like Alias and I-DEAS did show it all off. Other apps ran great, but did their own thing no matter what UNIX you were on. Then again, some of those interfaces were amazing in their own way. I would count CATIA R5 in that group for sure.
This fits with my experience and I agree that CATIA V4 UI isn't the most exciting. It is solid though and something many people spent 9 to 5, 200+ d/y with, so I thought it could be interesting for octorian to experience what working with these machines really felt like back in the day.
You probably have zero chance of getting your hands on the software, but in the late 90s TV stations ran their weather graphics off of SGI workstations, with software based in part on something called Inventor (SGI's Open Inventor 3D toolkit). The software was incredibly easy to use for building 3D animations, which were baked to video with built-in loops and pauses for the weather segment.
If your meteorologist didn't want to hold a remote, the weather producer would have to sit in front of the workstation with their hand over the spacebar waiting for the right cues to advance to the next loop/pause point.
At some stations, it wasn't about the talent not wanting to hold the remote, it was about the station not having an engineer on staff who could rig up the remote.
At many stations in the 90's, and even some today, the "remote" is nothing fancier than a garage door opener, with the relays hooked into a breakout box to a DB25 serial port.
I did a lot of development sitting in front of my IBM 43p with its Intergraph monitor and Model M keyboard (not original). All I needed was to ssh into my Linux machine and run the software over the network. The responsiveness, however, will be mostly gone in this scenario: X doesn't come for free.
These machines are excruciatingly slow by today's standards and I wouldn't want to run modern software on them. Still, they are the dinosaurs of the PC ecosystem - evolutionary dead-ends that hint at what could be. Who wouldn't want to study a living dinosaur?
There was a graphical programming language for SGIs called AVS. I saw impressive scientific applications made with it: elaborate physics simulations with 3D graphics. The language and the applications seem to be completely forgotten.
This page about AVS is dated 1995 and cites a 1989 paper:
Almost all of the software worth preserving is preserved. It may not be openly available (though odds are it will eventually be dumped openly on the net) but you can find pretty much anything you want if you put in some effort. It comes down to you getting access to one of many private repositories that are dedicated to software preservation.
Yes. I regret it a little, but I worked in the industry having access to a lot of big software.
Piled up a bunker of Sgi machines, O2, Octane, Indigo, Indy, most very well equipped with the advanced memory and graphics options.
Alias, I-deas, Maya, Adobe, 3DS Max... Let's just say I could license most of that at will due to an error...
Learned a ton of high-end skills that I benefit from today too. Great fun. And amazing demos. Putting those together was a total blast. People would get blown away by Showcase, the SGI tools for video capture and audio, and Composer to mix, rip, burn. This was the mid 90's, when most people were using Win 98, or maybe NT 3.51.
Let's also say I got rid of said error (so don't ask) and gave the whole lot away to a 20-something just itching for those same experiences. Those machines were well loved and used. Cool.
I needed a change away from that kind of computing as it went on the wane. Didn't want to look back.
But at the peak? I was very seriously productive on Irix. The Indigo Magic Desktop took everything I ever threw at it.
And one could flat out bury those machines with a heavy workload and still the UX was golden, responsive almost as if idle!
At one point, at some conference, the head scientist at Sgi said, "We turn compute problems into I/O problems."
How the machines performed showed that ethos off well, IMHO.
Honestly, today on say, Win 10, I can do all I did then on a laptop, but not enjoy it as much as I did that environment. It is responsive and fun!
Big software on Irix remains one of the peak computing experiences I have had. Damn good times.
I may have to put this on a Linux install and have some fun.
In my view, the IRIX scheduler is insanely good at balancing UX with workloads. It may not always deliver the peak possible throughput, but a skilled user can keep blasting through their tasks pretty much no matter what the OS load is.
On a lark, I got to try an extreme example of that:
IRIX 5.3 on an Indigo Elan, 30 MHz CPU. I forget which one; I want to say R3k. (Check Ian's SGI Depot for more info.)
I compiled "amp", which is an optimized mp3 player that formed the basis of many players after it was written.
At 30 MHz, that Indigo Elan could play MP3 files up to 192 kbps, over NFS, while the desktop remained responsive.
At 256 kbps, CPU load was about 95 percent, and it would glitch on occasion.
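Rough arithmetic (a sketch, assuming the figures are audio bitrates in kbit/s) shows why NFS was never the limiting factor here; the sustained read rate for such a stream is tiny, and the decode work on the 30 MHz CPU was the real bottleneck:

```python
# Sketch: sustained read rate needed to stream a constant-bitrate MP3
# off NFS, assuming the quoted figures are audio bitrates in kbit/s.

def stream_bytes_per_sec(kbit_per_s):
    """Bytes per second consumed by a constant-bitrate stream."""
    return kbit_per_s * 1000 / 8

for rate in (128, 192, 256):
    print(f"{rate} kbps -> {stream_bytes_per_sec(rate) / 1024:.1f} KiB/s")
# 192 kbps works out to only ~23.4 KiB/s of NFS traffic.
```

That lines up neatly with the "turn compute problems into I/O problems" ethos quoted below: the machine's I/O path had headroom to spare even when the CPU was pegged.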
I found that quite impressive personally. I used that little Elan as an X terminal for a while and it was a pleasure to use.
Damn. Say what you want, their stuff was fun, had amazing docs, and got work done.
[Looks at Android / Win 10]
That's part of why I scaled down. Unloaded that gear and went small, embedded retro for my fun computing. The work is easier today, and fine by me, because it is just work.
I hate to say it, but that's even more useless than a vanilla IRIX installation. By far.
One big problem with running a 3rd party OS on these old machines (specifically SGI, Sun doesn't have this issue) is that they tend to not really support any of the hardware that actually makes these machines interesting.
Porting and hacking on low level stuff is probably the most interesting thing I can think of for it. That is probably why I keep the machines around. In practice though I hardly have the time to use them.
Porting software to unexpected environments sometimes exposes legit bugs and validates your design. I always like to get software I write working in unusual configurations. It tests my assumptions and often results in something sturdier.
If you want to find bugs in your code that arise from assumptions about machine architecture, you can use emulators. You don't need an ancient MIPS machine for that, unless you want to find bugs on code that supports long obsolete hardware. And you can emulate architectures far weirder than MIPS with that.
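A minimal sketch of the kind of architecture-assumption bug an emulator flushes out is byte-order dependence; the wire format and magic value below are made up for illustration:

```python
# A classic machine-architecture assumption: reading a binary header
# with native byte order instead of an explicit one. The wire format
# and magic value here are made up for illustration.
import struct

payload = struct.pack(">I", 0xDEADBEEF)  # protocol says: big-endian u32

# Portable: the byte order is stated explicitly, so this works on
# any host.
(value,) = struct.unpack(">I", payload)
assert value == 0xDEADBEEF

# Non-portable: "=" means the host's native order. This only happens
# to work on big-endian hosts, which is exactly the kind of latent
# bug that running the test suite under an emulated big-endian MIPS
# (or on the real machine) surfaces immediately.
(native,) = struct.unpack("=I", payload)
print(hex(native))  # 0xdeadbeef on big-endian, 0xefbeadde on little-endian
```

The same pattern applies to struct padding, alignment traps, and char signedness; an emulated exotic target exercises all of them without any vintage hardware.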
And you should test on slow and unreliable hardware. You'd be surprised, for instance, how frequently servers fail to boot because the BMC ignored your instructions.
>from assumptions about machine architecture, you can use emulators
That is exactly NOT true, since there is no 100% correct emulation around. Think of x64 emulation with Meltdown included: if you don't know about Meltdown, your emulation has no implementation of it.
Look, you made a perfect counterargument yourself by bringing up unreliable hardware... by your own words you could just test with unreliable emulation ;)
>unreliable hardware... how frequently servers
Wait... "servers" and "unreliable" don't go together... and then the BMC argument... sounds a bit crazy to me.
I use that machine for what I want to use it for; I also use my C64 as a web server and my Lemote laptop as my main backup.
Better to have hardware you can use for something than not... and you know, you can reinstall IRIX later if you want.
Can someone please explain the main hypothetical advantages of this desktop environment? Even the article itself does a poor job of explaining this; it seems to assume that the reader already knows what IRIX is, and why people used (use?) it over other environments.
As others have said, it's mostly nostalgia, and perhaps an effort to go back to what was a comfortable and sensible working environment from 25-30 years ago. It's the same reason the Haiku operating system is being developed: Nostalgia for an operating system that was far more advanced and beautiful than its contemporaries in the late 90s and early 2000s.
Seeing this question asked makes me sad. If you are browsing this on Linux then you're probably using code descended from Netscape which was developed by a guy called Zawinski on an SGI Indy.
Or if you have seen a movie, then all the special effects are descended from work done on SGI machines.
If you have fuel in your car, the oil field was probably originally found after running seismic analysis on an SGI. They were famous for their movie work but Oil & Gas was actually a far bigger market for them.
If you have flown anywhere, the pilot was trained on a simulator descended from technology developed by SGI.
If you program C++ then STL comes from SGI. As does the XFS filesystem.
If you have seen a weather forecast on TV, that probably was done on a Cray, who for a while were owned by SGI and who developed a lot of their tech. Many TV stations used SGIs to render the forecast on the green screen behind the presenter too.
If you have played Nintendo, the MIPS processor in it was developed by SGI. If you play games on PC, OpenGL was developed by SGI.
They were once one of the most influential computing companies in the world. Now they are merely a legend, and a fading one at that :-(
Been a while and I had nearly forgotten! It used to be that when you googled an STL container, SGI documentation came up first. This was occasionally annoying as there were a few things that were slightly different from what was standardized. If memory serves, they had a better hash table before the standard did...
I bet a lot of the interest is from people like me who grew up with home computers in the 80s and 90s.
In the old days Silicon Graphics had this almost mythical reputation. As a kid with a home computer I used to daydream about one day getting my hands on an SGI, what kind of games I'd be able to make on it.
The computer scene was more fragmented back then. Graphics Workstations sat apart from home computers, and the ordinary Amiga or C64 user would only read about them in magazines or see them featured in TV shows about movie special effects.
If you liked computer graphics, these TV shows and articles were like glimpses ten years into the future. Because of this, SGI became a byword for "amazingly cool computer with powerful, exotic graphics hardware that I will never get my hands on". Many of us hoped that one day we'd be playing video games on an SGI home computer or VR deck.
In Pelevin's Generation «П», published in 1999, all the politicians and other celebrities are not real people, but rendered for TV on SGI computers, complete with some comments on events that require more computing power to render and thus "happen" rarely or only in richer countries. No other brand (even an entirely made-up one) would look as natural in this myth.
IRIX 4Dwm was like the Rolls-Royce of desktops. It didn't get in the way of what you were trying to do. This was in a time when you only ran one application on your SGI machine, e.g. 3D modelling. Compared to Windows, Sun and other desktops it was just nicer to use. The responsiveness, the cool fonts and the 3D bits, with no anti-virus pop-ups or other nonsense, made it a joy to use. The icons were Susan Kare originals, done in vectors before SVG was a thing.
Things like the CPU performance monitor were a joy to see and haven't been improved on. Same goes for the screen savers, not that we have them now.
There was nothing clunky about the magical desktop and normally your applications were tools that cost tens of thousands. Mere mortals thought they were seeing the future and what a brilliant future that was. The web was going to be VRML 3D in those days but we ended up with 2D bloat.
The tools you have shape how you think and SGI just made your thinking creative and imaginative. Everyone else was on Windows which had great apps like MS Office but an SGI machine had none of those civilian programs.
IMHO Ubuntu has a desktop that falls short of what SGI had even though it is an environment that doesn't distract in the way that Windows does.
How I see it is that you have consumer operating systems such as Windows, ChromeOS and OSX and you have professional operating systems such as what classic UNIX machines had. Linux desktops are in this professional mould.
Yes it did, and yes it could, although I never saw it run on anything else - or if I did i didn't recognise it...
AFAIU the Atari version was a little more capable and a bit more fun than the other versions: Apple sued DRI and crippled later versions to comply with Apple's demands, but Atari was somehow excluded from those demands, since they had bought rights to the source code for further development, and so continued with the original concept.
You can definitely put together a GEM on top of FreeDOS stack.
I got some janky 486 laptops for note-taking in university and swapped between that and OS/2 3.0 with its rudimentary pack-in word processor. PC/GEOS would have been even better, but was not readily available at the time, as I think it was still a commercial product targeted towards schools using hand-me-down PCs.
At my first software development job in the mid 90s, the "cool" basement room with all of the smart/weird developers in it had a mix of Sun SPARCstation 10/20 and Sun Ultra 1 workstations.
There was also this one weird SGI O2 they had just bought to port their software to the IRIX platform, but no one really wanted to use it, because of IRIX. So I picked that workstation, just to be in that room. Smartest decision of my life: what I learned in there defined my career.
The Irix Interactive Desktop (based on Motif) felt so incredibly responsive on the O2, compared to Motif/CDE on the Sun workstations. It was almost BeOS-like in that regard. It was the little touches that mattered. A random example: the CPU usage monitor updated at like 10-20 Hz, instead of like 1 Hz on the Sun workstations.
IRIX was my favourite UNIX flavour in the 1990s. As a result, I was tempted to try MaXX out after reading this post for purely nostalgic reasons. I keep an SGI Fuel in my basement (running IRIX 6.5.30 and everything from Nekochan) for when I need a nostalgia kick.
However, after thinking about whether I'd actually use MaXX over GNOME for a few minutes, I decided that there was no compelling reason to do so other than nostalgia.
Has anyone tried this out and decided to use it as their daily desktop? If so, I'm interested to hear your experiences and rationale. Cheers!
Never used IRIX, but I genuinely really like the icon set in that first screenshot. Also, SGI Screen is still one of my favorite fixed width fonts.
Also I think FVWM was heavily inspired by this, and I've used that as a very efficient UI over VNC (gradients, images, rounded corners, etc. are expensive in terms of bandwidth; the minimalist aesthetic of this interface conveys a lot of structure in little data).
I had an internship at an IBM reseller and consultancy a couple of years back. When we were at a client's after finishing a project, they showed me their basement full of decommissioned computers and let me take one home for free. I picked an SGI Indigo2, purple version, running IRIX. I had no idea what to do with it, so I played around for a couple of months and then made the mistake of throwing it away. (That still bugs me.) I still love the interface and the way things were organized in the system, the file manager in particular. It ran nedit as its editor, and I kept using that on Linux for a couple of years afterwards. Nostalgia...
edit: Remembering, I got the machine with original SGI keyboard, mouse and screen.
I had a purple one and a green one! I'd play Doom (which didn't use the 3D hardware), played around with a few of the OpenGL demos, barely surfed the internet with the ancient version of Netscape, then didn't do much else, because I didn't have the CDs that came with it or the knowledge to try and build anything else.
From reading the comments, it seems this effort is driven mostly by nostalgia for the IRIX desktop.
I'm more interested in reliving the CEDAR/Tioga interactive desktop environment pioneered by the Xerox R&D team back in the 1980s/1990s for their in-house productivity tools. The system had productivity-enabling features that are still not widely used today. Xerox also managed to port the system to SunOS at the time.
Anyone aware of any effort or clone that can enable CEDAR/Tioga to run on or emulated Linux?
I remember a prof who bought a purple SGI workstation that didn't have enough RAM. He plugged it into the Ethernet, couldn't get work done with it, and left it plugged in anyway.
I found out two years later that the root password was the empty string.
Then there was the time I went to Syracuse for the first conference on Java for scientific computing. Geoff Fox had two identical twins from eastern Europe run a demo on two workstations hooked up to a big SGI computer. It failed, which Geoff answered with "never buy a gigabyte of cheap RAM".
I used CDE more than Irix (so I can't speak too much to Irix), and generally preferred it for its clean look. And that's the reason I'd use it for what I call Rebasing. Similar to bullet journaling, you find a clean slate that helps bring you to some sense of organization and the big picture again.
Also similar to bullet journaling, it doesn't take a modern DE to do so. An old one with a usable text editor will do fine.
In fact the clean lines, sensible design, and visible pixels give some of us a mental cue toward a simpler, gentler productivity reset. I find that my main DE is much less compelling in that way, modern though it may be.
Around 2000 or so Dell screwed up the delivery of a couple desktops right after I was hired. I ended up using an O2 to read my email, read documentation and to get up to speed with our tech stacks. I used it for about a month. Very nice machine, exceedingly responsive.
The website has instructions for building on FreeBSD, but how does that square with the license agreement that "permits The MaXX Desktop to be deployed and executed ONLY on the following Linux platforms; x86, x86-64 and IA-64"?