The Canon Cat (reproof.app)
180 points by maguay on Oct 21, 2022 | 160 comments


The "don't use icons, use words" rule resonates with me. I really dislike android drawing(and other similar) apps that present a cluttered interface with a dozen of cryptic icons.I much prefer words. Also having each widget take a lot more space forces the designer to really think about the value of adding one vs hiding it.

Multilevel menus are IMO definitely better. If the app is a real productivity app, people will spend ages in it anyway, and adding the ability to attach "shortcuts" to the most-used items will give power users a way to reach them quickly.


It's almost ironic, given where they're deployed, but icons are an interface that favors the expert over the beginner.

iOS has terrible discoverability (you'd think Apple could afford to release a detailed manual), but once you've figured out what all the icons and gestures mean and do, it's fluid.

A better example is any photo or vector editor you've ever used. They all have tons of icons, all over the place, and if you want to use them, you do need to figure out what they mean. Even with keyboard shortcuts, the selected tool is indicated with the icon.

Programmers are heavily biased towards using words for everything; it's a strength in the context of the profession. The "any text can be selected and run as a command" interfaces are a dead end, but I'm in favor of a revival, since it's a great match for the kind of work we do.

But if you give an artist a drawing tool with only "Tool→Lasso→Magic Lasso" as an interface, they just won't use it.


iOS has terrible discoverability

Absolutely. I really don't understand how people think that making controls invisible is in any way useful, or user-friendly. To me, the worst is "We just showed you this big screen of options. But we're not going to show you that you can search the options if you drag the screen downward to reveal a search bar. But you wouldn't think to do that anyway, because pulling down on a screen is the Refresh function in other places."

you'd think Apple could afford to release a detailed manual

It does: https://books.apple.com/book/id6443146864

but once you've figured out what all the icons and gestures mean and do, it's fluid

Yep. How to do something in iOS n is not how to do something in iOS n+1.

By the time you read through Apple's 74-page manual, you have a new version of iOS and have to start over again.

I'd rather spend time with my cat and my family than re-learn every piece of tech from every giant tech company over and over again.


It's Apple-think. I showed someone I know (who used to work at Apple) my SailfishOS phone, and this was before iOS added gestures to manage apps. SailfishOS had those "pull down to minimize" and so on gestures before iOS, and when he tried them out he said they were terrible UX because they were hard to discover.

As soon as iOS added them, they were the future of mobile UIs.


The whole flicking 'cards' to manage apps thing was from WebOS.


Flicking cards was the absolute best part of webos. It was irrationally entertaining.


What made it great was that the Palm Pre’s size made it so ergonomic to hold the phone one-handed and manipulate the cards with your thumb.


> It does: https://books.apple.com/book/id6443146864

I wish I was joking: my first iPhone was a 4, and you're telling me this for the first time.

Did I mention that discoverability is terrible? I'm sure that URL was printed in tiny gray text on that tiny piece of paper you always throw away first...

On the bright side, I'm going to read it. Thanks.


> Did I mention that discoverability is terrible?

Apple does also include the Tips app, with notifications for it turned on by default. In previous iterations if you read all the tips for your new phone, it would point you to the iPhone User Guide; now it’s just at the bottom if you scroll all the way down and you don’t have to read it in Apple Books anymore.

iPhones have a lot of new features every year, so I find it helpful to go through the Tips app at least once every new OS release and also every phone purchase and I usually learn a couple of new things by doing so.


I'm sure that URL was printed in tiny gray text on that tiny piece of paper you always throw away first

You are correct. That's exactly where it is.

Or maybe was. Because I don't remember seeing it this time with the latest phone. Maybe it's not even there now.

Maybe it's in the Tips app now.

It used to be available as a PDF, but I could only find it in Apple Books.


>iOS has terrible discoverability,

I’ve used it since v1 and still don’t know how to do some things reliably.

Lately some weird new copy and paste bar appears, no idea how I’m triggering it. I mean if it works better than the OG awful copy paste menu I’d be all for it but how the heck is it even appearing.

Don’t get me started on the camera app; I can never figure out how to get the flash setting how I want it. It tries to be smart but stops me from just setting it as I want. I need Apple to trust me when I know long exposure won’t be enough.


> Lately some weird new copy and paste bar appears, no idea how I’m triggering it.

iOS 13 added a whole set of gestures that all key off of three-fingered interactions with the screen. Just tapping with three fingers shows that bar, which is probably all you're managing to do accidentally.

The rest: three-fingered pinch-closed to copy ("picking up"); three-fingered pinch-open to paste ("putting down"); three-finger swiping left/right for undo/redo... and three-finger double-tap for an undo shortcut.

Totally agree this is undiscoverable, but it's also fairly well gated behind a single root gesture so once you understand the trigger it's memorable.


Wow thanks. Yeah no idea that was a thing. Think I remember seeing the bar during WWDC and assumed it replaced the existing one.

A cheat sheet or a tutorial like the OG Mac mouse tutorial would go a long way.


Every year or two, I figure I really should read up on what new features/changes Apple has added to iOS (or iPadOS) that I don't know about. (And, because I so rarely use it, I still never know how to use the split-screen stuff in iPadOS.)


Split screen is easy. Tap drag from top middle down and to a side. App takes up less than all of screen if app supports it. Anchor it to a side. Tap another app. Done. Not intuitive at all, though.


Right. That's sort of my point though. When I haven't used it in 4 months, I forget and can't figure it out without looking it up.


It's so stupid, because on a Mac they have these nice built-in demonstration videos for touchpad gestures. They already have what could be a great model for this documentation.


iPadOS frequently pulls up a file list that I'm not sure where it came from and that is extremely hard to dismiss (no close box, and the obvious gestures are mockingly ineffective). I eventually get rid of it and forget how. The whole process makes me feel like an 80-year-old facing DOS.


Menus are beginner-level, icons are for intermediates, and keyboard shortcuts are for experts. All are necessary.

Tutorials are for onboarding and getting the beginners started.


Yeah, I would love to see references to studies that attempt to categorize these disparate use cases you’ve described. On the one hand, there does seem to be a clear demarcation between an interface that’s facilitating information creation and retrieval versus ones that facilitate more artistic endeavors such as photo and video editing. On the other hand, you have use cases like AutoCAD which seem to blur the lines.


Read the book “About Face: The Essentials of Interaction Design”; you are describing different “postures” of application. It’s a must-read if you want to stop guessing about what interface fits where. It’s my bible, as it were.


> iOS has terrible discoverability

I think you mean: a touch interface makes discoverability especially difficult, and iOS doesn't do enough to address this.


In my experience artists using complex software actually prefer text labels to an endless sea of icons. High-end visual effects software tends to use text buttons.

Twenty years ago, when there still was competition in high-end 3D packages, before Autodesk bought everybody and monopolized the market, there was a race between Maya and Softimage XSI. The latter was generally praised for its artist-friendly UI which used text labels while Maya was icon-heavy and seen more as a technically oriented tool.


> But if you give an artist a drawing tool with only "Tool→Lasso→Magic Lasso" as an interface, they just won't use it.

I can't help but remember how every architect absolutely loves the AutoCAD CLI.


Words in interfaces are terrible unless a lot of thought goes into localization.

For example, consider an American UI designer creating a 'Save' button. Now, translate the button to Dutch ('Opslaan').

What you'll see very often is that the button is only wide enough for the English text ('Save'), so 'Opslaan' gets rendered as something like 'Ops...', which is obviously a terrible experience.

I just set my computer and phone interfaces to English to sidestep this problem, but not everyone wants to do this.


On any Apple platform, that would be considered a serious bug. It shouldn’t ever be a problem in SwiftUI, and you can achieve the necessary dynamism without too much trouble by using layout constraints in both AppKit and UIKit.
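A minimal SwiftUI sketch of that point (the SaveButton type is mine, and it assumes a "Save"/"Opslaan" pair exists in the app's Localizable.strings): the button takes its width from the localized label, so the Dutch text never gets clipped to "Ops...".

    import SwiftUI

    struct SaveButton: View {
        var body: some View {
            Button {
                // Hypothetical action; real persistence code would go here.
            } label: {
                // A string literal in Text is a LocalizedStringKey, so "Save"
                // resolves to "Opslaan" at render time under a Dutch locale.
                Text("Save")
                    .padding(.horizontal, 12)
            }
            // Size the button to its (localized) label instead of truncating it.
            .fixedSize()
        }
    }

The same idea applies in AppKit/UIKit with Auto Layout: constrain the button to the label's intrinsic content size rather than to a fixed width.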


Apple's own products do this often. For instance, an instrument in one of my tracks in iOS GarageBand is listed as "Kind...ass". It's supposed to be "Kindergarten Bass".

https://i.redd.it/0ncp6hgad7v91.jpg


It's been a decade since I did any app development on MacOS. But I do recall having to create a separate dialog box XIB resource for each language, with different sized buttons.


That's not needed anymore when using auto-layout, introduced in 10.7 if I remember correctly.


Why does the word “save” need to be in a button? The claim that natural language is an inferior interface is up for debate.


I'm pretty confident that "save" is only one of many examples where this issue can happen.

>The claim that natural language is an inferior interface is up for debate.

They only said that it requires much more thought, not that the debate is finished with their comment.


> The "don't use icons, use words" rule resonates with me. I really dislike android drawing(and other similar) apps that present a cluttered interface with a dozen of cryptic icons.I much prefer words.

Yeah. I've used a Mac for the last few years at work, and I think the Windows taskbar is unambiguously superior to the dock. A big part of that is how it integrates text, which just seems to take far less cognitive load to process. With the dock, I frequently have to pause for a couple of seconds and try to remember which inscrutably-styled circle I need to find to open the app I want.

Though a dock interface would probably be better if each app picked a distinct object to use as its icon, rather than the design-collapse we've had where all icons are either some circle or rounded square (often with a circle inside).


I find your interpretation of the dock interesting, especially because I've never used the dock.

  - I don't use it to launch applications, I use launchpad for that.
  - I don't use it to open files in a given app, I right-click and use the contextual menu for that.
  - I don't use it to delete files, I use command-delete for that.
  - I don't use it to get to downloads, I use the Finder for that.
  - I don't use it to switch applications, I use command-tab for that.
  - I don't know what else it's good for, or what I do instead.
I say this as someone who has used Mac OS X since it launched: the dock is useless to me; I have it hidden, and if there were a way to permanently kill it, I absolutely would.


To "disable" the Dock — not really: it will pop up sometimes, e.g. when some icon bounces, but anyway — you can

    defaults write com.apple.Dock autohide -bool true
    defaults write com.apple.Dock autohide-delay -float 100
    killall Dock
Here 100 is how many seconds you will have to keep the cursor on the edge of the screen before the Dock appears. You could set it to 3 or 5 should you need the Dock sometimes but don't want to trigger it by accident.

[Edit: Autohide can be toggled on/off also with ⌥+⌘+D. You can use the combination to show/hide the Dock, paired with a high autohide delay.]


Thanks! Yep, I used to set the delay to a long time, but I gave it up because my post above is a slight lie -- when something pops up in the dock to notify me, I go to the dock to see what the heck it was. That one use case caused me to give up setting a long delay because of how f'ing annoying it is for something to pop up, and then need (I think I set it to) 5 seconds to figure out what it was.


Same, I make it quite small, put it on the right, and autohide it, and it still annoys me.


  > design-collapse we've had where all icons are either some circle or rounded square (often with a circle inside).
I've noticed this myself; now that almost all icons after Big Sur have the same shape, it takes me a lot longer to find anything in the dock / Applications folder.

It's really frustrating and disappointing to see things deteriorate like this, and nobody seems able to stop it...


> A big part of that is how it integrates text

Interestingly, I just looked at my Win11 taskbar. The only text there is "type here to search".

I have a KDE taskbar at the same computer. It currently has a similar number of open windows, all with enough text to understand not only what application they are, but also what I'm doing there. (Except for emacs. Emacs could use some better title texts.)


> Interestingly, I just looked at my Win11 taskbar. The only text there is "type here to search".

I try as hard as I can to keep something as close to a Win 95-style taskbar as possible. I don't use Win11, I know I can do it on Win10.


I tried "combine only when full" mode for a while in Windows 7 and 10, thinking it would be the best of both worlds, but quickly realized that

1. usually I have too much open + the title captions aren't good enough for the tiny snippet that fits to be useful

2. keeping my pinned apps in predictable places is more important than saving a click occasionally

3. if I really want to see all the open windows with their titles, alt-tab or windows+tab works better for this

So I don't miss it on Windows 11, although I don't like the default centered taskbar (sacrificing (2) to move stuff around all the time for no good reason) and change this setting.


Combine only when full worked great on Win7. But Win10 was so wasteful with the startbar space that it became useless.


Have you looked at the Acme editor?

https://en.wikipedia.org/wiki/Acme_(text_editor)


Which was inspired by Oberon, which had an entirely text-centric UI, with tiled windows and controlled with a mouse... and no command line.


I really dislike Android drawing (and other similar) apps that present a cluttered interface with a dozen cryptic icons. I much prefer words.

Or at least give me a choice.

I like how many macOS programs let you have both the icon and the name of the icon under it in the toolbar. It's especially helpful when I'm new to a program and learning its functions.

But opening something like Photoshop or Affinity Photo or Illustrator that I only use maybe once every few months, and it's all hieroglyphics.

And Photoshop makes it worse by piling multiple functions into one icon. So flood fill and gradient are the same icon, and box and circle are the same icon; so the tooltip is useless.


Oh, no, Adobe does it even worse than just a tooltip.

Rich tooltips! Big fancy full-color tooltips! That still don’t really show you how the feature works, and also doesn’t explain there are more features under that icon menu. That obscure huge chunks of the icons next to them, and are patronizing and annoying to seasoned users.

https://www.photoshopessentials.com/basics/rich-tool-tips-ph...


And to be fair, the 1984 Macintosh relied on text far more heavily than icons. The trash can even said “Trash” underneath it!

What Jobs wanted to build was indeed radically different from Raskin’s vision. But many of the underlying ideas and principles (“No modes!”) were preserved to valuable effect.


Icons reinforced with text are amazing, because people can learn what the icon is, and then you can reference without text at times when there isn't enough room.


“No modes” was Larry Tesler, not Raskin.


I know, but it’s a principle that Raskin endorsed & you also see in the Cat.


And if it's not cryptic icons it's undiscoverable actions or swipes... How come all the lessons of UI design that were known in the 90s got so completely forgotten?

I wonder what the standard is at these sorts of software shops. If the SV dev/designer can navigate through the app, then it's good to go? I don't know...


But users can grow accustomed to what an icon means over time, right? It happened with the floppy disk icon, the notification bell, the open folder, etc. If you start out with [icon][description] for the first few iterations, wouldn't the user eventually learn to associate the icon with the action?


But users can grow accustomed to what an icon means over time right?

Only if you use the same program all the time, and only if that program never changes.

I use Adobe Illustrator a couple of times a year. There's no way I'm going to remember how to do very much from the ten minutes I used it in March to the next time I need it in October. And by then, the process is likely to have changed because the program got auto-updated by the Almighty Cloud™.


I can only learn colorful icons; B&W ones I cannot recognize even after a long time. E.g., right now I'm looking at my Opera sidebar and cannot quickly decide which icon is History. It's much easier to open the O-menu and navigate from there, which is what I always do to erase that last hour. Same for downloads: I know where they are at the top-right corner and click "Show more" there instead. I can't find them in the sidebar without thinking twice.

Btw, I have no trouble using Paint.NET. I just made a screenshot of its UI and turned it B&W. The icons instantly became less discernible, and that's still with correct shades of grey. If they were this modern outline-abstract bullshit, I couldn't use it at all and would look for an alternative.


That only works if icons remain consistent and reasonably detailed over long periods of time. Nowadays all icons are abstract, monochrome shapes. They are similar enough to each other, and vary enough from one application to another, that one application's "back" or "new" buttons can look pretty similar to another's "undo" or "copy" buttons. With most apps on my phone I have no idea what the buttons do unless I press them, even if I'm perfectly familiar with the function.


For better or worse, the web (and now all the various portable devices we carry around) introduced users to all manner of user interfaces. I think we're a little more flexible now. Or are we a little haggard?


The fewer words you use, the easier it is to localize for other languages, which I suspect is why a lot of things are like this. And a lot of things that don't need localization are probably designed by people who saw all the ones that were created that way and copied them without understanding.


Yeah and don't get me started on "swipe up/down for volume" (VLC on iPad).


The exmh E-mail client frontend used this: https://rand-mh.sourceforge.io/book/exmh/thexmdi.html


Multilevel menus, like drawers in a workshop, hide tools.

At least keep the square, block plane and pencil within easy reach.


Icons are used for a very good reason; we humans have a remarkable visual memory. Also, we can distinguish icons by appearance faster than by words. Also, words are language-specific; icons... it's more complex...

I can't provide evidence of this ATM but I did do a degree in a related subject.

(I'm ignoring significant visual impairment here)


I once designed an app with a lovely iconic interface.

The customers hated it.

The next release had a lot more text.

For one thing, icons tend not to translate well. We're best off using ISO icons[0], but designers hate them, and always insist on reinventing the wheel.

[0] https://www.iso.org/obp/ui/#home


> We're best off using ISO icons

Hm, first time seeing this and not getting any good results. Here are the results for "save" https://i.imgur.com/UC4H3jE.png.

Are you sure any of these are meant for a technical context?


If you enter "data" instead, you'll get the full context of the only one of those four which is applicable.

I could quibble with some details of how these icons are designed, but I like that read and write point in the same directions as the carets in Unix redirects.

That leaves up for "make data leave this device and go elsewhere", with down for "make a local copy of this data", aka upload and download.

I must confess, however, if I saw the ISO icons in the wild, I wouldn't understand them. This is while already being aware of their existence.


The first one is "to indicate the entered data is saved" and it's been registered since 2004. I think it's more in the context of forms and less about saving a modified file.

Never seen it before either.


Icon #6177 was bewildering. Apparently it is for "use on equipment" in the "electrotechnical" domain. Without too much of a stretch of the imagination, I can see a graph on an oscilloscope there, but why for 'save' and not for 'print' or selecting the time domain in a multi-function device?


Yeah, their browser sucks.

I have a few collections that I downloaded, that have software-relevant ones.


> The customers hated it.

Sigh. You screwed up. You gave the users what you wanted to give, not what they wanted. You must always test & iterate.

> The next release had a lot more text.

Yes, I was too general - different interface needs must get different interfaces. Sometimes icons aren't appropriate, sometimes icons + text is better, sometimes you have to get creative.

> For one thing, icons tend not to translate well

I said that.

> We're best off using ISO icons[0], but designers hate them, and always insist on reinventing the wheel.

That's a flaw in the designers, not the icons then. Are you blaming the icons for that failure?

Edit: those icons don't seem to have anything to do with standard computer desktop UIs AFAICS; am I missing something?


I don't understand the hostility. I simply told a story about one of the many "bad judgment" stories that I have encountered, and the lessons learned, therein.

I wasn't asking for judgment.


Was intended as blunt, not hostile (how does it come across hostile?).

WRT 'giving the users what you want not what they want', that's a classic mistake I've made several times and finally learnt from. Hopefully you can get there faster than I did.


I did, many years ago. The story is from about 1993.


> Icons are used for a very good reason; us humans have a remarkable visual memory.

That's why, for some 10 years now, they've been changing them regularly.


Jef Raskin's The Humane Interface is a very good read. I didn't find myself agreeing with many of its proposed solutions to HCI problems, but it does an incredibly good job of identifying the issues that we still have to this day with user interfaces.


Agree, it's an eye-opener. Must-read for everyone working with design or UX.


Every time the Canon Cat comes up, I'm surprised no-one mentions the Amstrad PCW family and Locoscript. It was launched two years before the Canon Cat, and not only made it to market but had roughly the same market penetration in the UK as lightbulbs.

Given that a lot of Amstrad stuff wasn't exactly known for being the highest quality, the PCW8256 and 8512 were surprisingly good for the £400 they cost (about a grand in today's money, roughly half the price of the Canon Cat). You got a fairly chunky computer about the size of a 14" TV, with a green-screen monitor which had a not-ridiculous persistence (so no flickering) and a fairly "gentle" colour. The 3" disks were a bit weird. The keyboard was a nicely clicky "spring over membrane" design that felt nearly as good as a proper mechanical one (not a patch on a Model M, but better than most!) and had a bunch of buttons for commonly-used functions. If you pressed any of the modifier keys, the menu at the top of the screen would change to show you what you could do. It even came with a dot-matrix printer that could do graphics, after a fashion.

You could buy a posher version with a white screen and a daisywheel printer, too, but they were more expensive and the printer was extremely slow and noisy.

It was only slightly harder to get started with than a biro and a notepad.

I wish Locoscript had won instead of MS Word.


Yes and no.

In terms of the function, yes. I have 2 PCWs and had another when they were new. I really like them.

But the PCW was a clever application of old technology, using some then-fairly-recent but far from state-of-the-art tech to enhance some obsolete tech to make a minimum viable tool.

It was a beefed-up CP/M machine, with a big RAMdisk, a hires graphics display, and a (very good) word processor as the primary app. The best word processor on any 8-bit ever, IMHO.

But it was a lot more flexible and extensible than any of the other (many!) competing dedicated word processors of the time, because it had a big screen (using an old-tech CRT instead of a tiny grey LCD), and lots of storage for its obsolete OS, and a proper OS and so on.

(All things that Amstrad failed to include in the PCW 10 and PCW 16, and so they flopped.)

It was inspired to realise that rather than a crappy LCD, you could include a really good mono CRT. Rather than a crappy 16-bit chip, a really good 8-bit one. Rather than a crappy old disk, a much tougher but obscure new one, and make money on the media. Rather than a printer port and driver hell, a dedicated printer mechanism with no brains. Integrate it all so there isn't even a ROM.

The Canon Cat was an inspired attempt to reinvent computing by getting away from the poor minicomputer-inspired UI conventions of 8-bit computers, by taking the basic hardware design and reinventing the entire software stack and UI from the ground up, using no existing tech other than a super-lightweight programming language. No OS, no apps, no concept of them. Barely even legacy minicomputer concepts like "files", and no "binaries" or "executables" or "programs" or "drive letters" of any of that 1960s/1970s historical baggage.

It was a profoundly different take on the same core function. Any technological resemblance is almost a coincidence.


I somewhat agree, but I don't see that Locoscript was really exposing much of the underlying OS to the user.

It booted straight into Locoscript, with no CP/M-like functionality exposed. The application was booted directly by the bootloader which as you say was loaded itself by the not-really-ROM in the bloody great ASIC.

People just put the "Start of Day" disk in the drive, powered it up, waited five seconds, and then picked the thing they'd worked on last with a single keystroke. Now I have to admit, I have never used a Canon Cat - I'm surprised I haven't even run across an emulation - but the videos I've seen of them in use didn't look all that different.


As I said, LocoScript is about the best WP I saw on any 8-bit. I don't want to denigrate it.

But I suspect it achieved that because of 2 things:

[1] It was written by 2 experienced developers, already veterans of the Data Recall Diamond system, as referenced here: https://www.theregister.com/Print/2015/09/09/joyce_turns_30/

[2] And because it came along late enough to be influenced by the new wave of 16-bit computers and GUIs and the combination of those and how they affected word processors.

LocoScript didn't expose any CP/M functionality because it didn't run on CP/M. It used the disk structure, and that meant it could use CP/M User Areas as a single-layer directory structure, which was inspired. TBH for non-technical users, 8 flat folders & multiple disks is enough, I suspect.

As for the Cat, yes, it was profoundly different, from the demo videos I've seen. Marcin Wichary got lucky finding a cheap one. I've never seen a working one. :-(


I bet there's some bloody ASIC in a Canon Cat that's not in any meaningful way documented and renders it uncloneable and hard to emulate.


Not at all. Other posts in this thread have linked to emulators. There's one on the Internet Archive, which runs in your browser.

https://archive.org/details/canoncat


It's funny that many of the principles like "Auto-save changes", "Restart where you left off", "Make commands accessible everywhere", and "Use words instead of icons" are qualities I enjoy in Emacs, and qualities that also derive from the Lisp machines.

The only conflicting point is "Never allow customization", which I guess is where typical user needs diverge from expert users'. Everything else seems to be universal to good UIs.


What customizations do you want to make? I made a few macros in FORTH, but that's probably not what you're talking about. Plus, those macros were stored on floppies, not nvram or a hard disk, so you had to remember which macros you put on which disk.

But yeah, you can't change the desktop image on the Cat (cause there is no desktop.) And you can't change window border width (cause there are no windows.) It's a very focused interface.


Yeah Cat vs. Mac is just a proxy for Lisp Machine vs Smalltalk.


I'd be happy if you didn't impugn Smalltalk so, as it was a marvelously customizable system.


Didn’t the Cat use Forth?


It may have. I was speaking from a UI perspective.


I remember reading about Raskin's work on Ward's wiki, back before Ward and crew made it unusable. I wrote some pretty heated diatribes against it, which I'm not going to do now, but I will push back on one of his dogmas:

> Never allow customization: Consistency, though, led Raskin’s perhaps most controversial idea, prompted by the trouble he saw customers have with documentation. “Customizations are software design changes that are not reflected in the documentation,” and as a documentarian, this could not stand. The designer knows best—something that comes through strongest in Apple’s products—and “allowing the user to change the interface design often results in choices that are not optimal, because the user will, usually, not be a knowledgeable interface designer,” said Raskin. “Time spent in learning and operating the personalization features is time mostly wasted from the task at hand.” Better a consistent, well-designed interface than one you could fiddle with forever.

This is a meta-dogma, a dogma about being dogmatic about your design. Never allow the user to change your Holy Vision, because Your Beneficent Self, The Designer (Peace Be Upon You), has decreed it shall be such, such it shall always be, yea, unto the ends of the system's profitability, never allowing the user to grow in their knowledge of how to do their tasks, never allowing the user to bring their own domain knowledge to their tasks. There shalt always be an unbridgeable gulf between Designer (insert holy trump here) and user, and the user shall never trammel the Designer's Roarkian Vision. So mote it be, amen.

It's High Modernism in software. It's the exaltation of One True Vision above the people who do the work and might, therefore, know something about how the work is done. It is, in other words, utterly shocking Jobs rejected the Canon Cat and its immense hubris. Probably because it wasn't Jobs' immense hubris.


It's so ironic reading this, considering most of the “conveniences” of modern iOS are clones of some popular tweak on Cydia.


IMVHO all the "bigs of IT" have done their best to castrate users' ability to use a computer rather than be used by it like a piece of machinery.

Computing is power; if users get such power they improve, becoming less easy to milk and steer. That's why we see a war against the Xerox desktop concept, waged initially by IBM and then by all the others.


Linux and emacs might be massively powerful in the hands of a competent, experienced computer user sitting mostly at their desk. For somebody with less time and competence, a tablet can give them much more power because they can use it straight away, in many situations.


GNU/Linux as a bootloader for Emacs can give a competent, patient, and experienced computer user a usable enough desktop, a real damn desktop system to work with; and that sorry state is not due to the nature of such a system, but to the lack of real development toward a generic target.

Just look at mail in Emacs. In a classic desktop world, mails would be base64-encoded, makeself-like archive files to be dropped on the recipient's mail server, a FAR simpler thing than IMAP. All the end user has to do is choose an available domain name, with a main address and any aliases they like. In the present world, nothing stops a dev from writing a simple email package in Emacs that wraps fetchmail/maildrop/notmuch to offer a simple config and automatically do the rest; it's just not there because too few of us use Emacs like that, so there is no interest in such development.

Similarly, office people would have FAR MORE interest in presenting directly and simply in org-mode instead of wasting time in PowerPoint/Impress/etc., but they simply do not know org-mode even exists. The few org-mode users don't have much reason to mount the tremendous education effort, which should start in schools, instead of wasting public resources feeding some GAFAM surveillance business to teach equally complicated, but limited and limiting, tools to children. Consider this: a modern tablet is NOT at ALL more friendly than Emacs; it's easier only because it's far more widespread, so people already know it, with knowledge absorbed a bit at a time the way we absorb our mother tongue in childhood, so that by the time we start learning it formally we already know it more than well enough.

Let's say you want to know your spending habits: how can a modern tablet on sale today really be powerful for that? How, on the contrary, can Emacs/org-mode serve this and so many other purposes well? In a better world, how powerful would it be to just grab your transactions from OFX FEEDS (not manual exports) into a local, fully integrated application, instead of wasting time on countless spying crapplications and slow services that do very limited things with them? Is it better to have computing centered on you, or computing that uses you for someone else's profits, leaving you some breadcrumbs to keep you engaged? Is fully integrated computing better, or a service-centered one?


You can use Emacs straight away too. Every version I have installed in the last few years uses CUA (Common User Access) key assignments for copying and pasting and a toolbar that makes it easy to open and save documents.


A simple low-energy computer that does one task, like word processing, or acts as a terminal client with an e-ink display, would be useful for focusing on one task without distraction. I personally would love to have a low-energy terminal device that I could work on for hours; however, my pessimism tells me there is not enough demand to scale this.


Those things already exist: https://shop.boox.com/collections/eink-tablet

Associated with the keyboard of your choice.

But it's also very easy to DIY by plugging an old laptop or a Raspberry Pi into an e-ink display/monitor and setting it up to automatically start a text editor at boot time. Even a Raspberry Pi Zero would handle that task superbly. I am not sure what prevents you from doing that, except laziness, if this is really something you want.


The problem with DIYing things with a Pi, I find, is that you inevitably end up with a fragile nest of wires and not a device you can rely on not to have been cannibalized (probably by oneself), unplugged, or disassembled to make room for other stuff. Not conducive to something like a bombproof note-taking device that's always on standby, and good luck if you want it to be portable or even luggable.


> I am not sure what prevents you to do that except lazyness if this is really something you want.

The problem with a general use machine is you can always alt tab out and procrastinate. If you're the one who set it up, you're the one who can un set it up.


I am pretty sure that as long as you have a phone/laptop at home, you are just as ready to put the focused machine aside for what you think will be a few seconds in order to procrastinate.


It does make it a bit harder. Also... NEW TOY.


I use my Newton eMate as a distraction-free writing device.

The keyboard is great, NewtonWorks is a surprisingly powerful office suite and it syncs nicely with the Mac through a serial-to-usb adapter.


For many years, my father used an NEC 8201a [0] for just this purpose - sitting comfortably writing without distraction. He’d later feed the text into his PC for layout.

It also had BASIC, which I used to play with – and at one later point, I even wrote a little program in Visual Basic to simplify the process of transferring text from the NEC to his PC, including getting rid of the many erroneous carriage returns that the process otherwise inserted.

[0] https://www.old-computers.com/museum/computer.asp?st=1&c=334


The closest, today, feels like eInk tablet devices like the Remarkable—though it's built more for sketching and handwriting notes than typing.



Not really. You can't edit.

I don't write on paper because, as someone who's been reading and writing for half a century now, I learned to write for a living on devices where I could edit the text as I go.

And it's so much better than pen and paper, where every mistake is permanent.

I don't want to hand write on my computer. Draw, yes, but not scribble. I want a keyboard and editing: cursor keys, delete in 2 directions, copy and paste.

Freewrite does not do that. It's a typewriter, but digital.


Well blow me, I had no idea it was thusly limited. What a weird decision!


I only found out myself in the owners' group on Facebook.

No cursor keys should have been a giveaway. Later hardware revisions can run a later ROM which adds cursor control using alpha keys, in a way that by the sound of it reminds me of WordStar or Vi in the 1970s.


https://en.wikipedia.org/wiki/AlphaSmart had some cool devices and they’re still usable today.


I still use one plenty. The batteries just lasting forever is such an incredible feature. Plus the Mac layout keyboard.


Yeah I love mine.


Have a look at the old Tandy m100. AA batteries, and you can type for days without distractions on a mechanical keyboard.

Or, pick up any Android based e-ink ereader and plug in a keyboard.


I am interested in starting a Guild based on this practice.


Would the refresh rate not be an issue for this?


The Freewrite has been around for a while.


See my reply above.


If you are interested in Canon Cat these are two good sites:

- Documents: http://www.canoncat.net

- Web based emulator: https://archive.org/details/canoncat


An article about the Canon Cat that doesn't mention it was programmed in FORTH? Perhaps not essential to the UX, but given the claim that "A predictable, documentable system must be entirely under Apple’s control," FORTH, renowned for its extensibility (in fact, for completely blurring the lines between language, OS, and application), seems a non-obvious choice.


Related:

Leap Technology (1987) [video] - https://news.ycombinator.com/item?id=33137433 - Oct 2022 (40 comments)

The Canon Cat: The Writing Information Appliance (2004) - https://news.ycombinator.com/item?id=30836958 - March 2022 (42 comments)

Demo of the Canon Cat computer released in 1987 with 'leap' feature [video] - https://news.ycombinator.com/item?id=29423545 - Dec 2021 (1 comment)

Canon Cat - https://news.ycombinator.com/item?id=26213934 - Feb 2021 (31 comments)

Leap Technology (keyboard vs. mouse on a Canon Cat machine, ca 1987) - https://news.ycombinator.com/item?id=22042900 - Jan 2020 (1 comment)

Canon Cat Emulation - https://news.ycombinator.com/item?id=18032916 - Sept 2018 (2 comments)

Canon Cat Resources – Jef Raskin's Forth-Powered Word Processing Appliance - https://news.ycombinator.com/item?id=14650365 - June 2017 (23 comments)

The Canon Cat - https://news.ycombinator.com/item?id=6978587 - Dec 2013 (30 comments)

Canon Cat Documents Archive - https://news.ycombinator.com/item?id=3394546 - Dec 2011 (8 comments)

Canon Cat - https://news.ycombinator.com/item?id=595744 - May 2009 (15 comments)


> design the computer to fit the human's needs

The problem: different humans need different things. But Apple nowadays operates as if everyone needs the same things, at least in terms of UI/UX. If your actual needs (or desires) differ from Apple's preconceived notions, you are simply out of luck.


or are holding it wrong!


It's interesting that the Cat came out in 1987, the same year as the Cambridge Z88.

The Z88 cost 250 pounds (about US$400 at the time), which would make it a lot cheaper than the Cat.


The Cambridge Z88 is immediately what jumped to mind on reading this for me too.

I still sometimes wonder if I should finally buy one. They were wonderful little machines.

https://en.wikipedia.org/wiki/Cambridge_Z88


I had one of those!

Very cool piece of kit for its time.


Most of the ideas and rules there, like "auto-save changes, restart where you left off, and make commands accessible everywhere", have been a core part of the Mac experience for over a decade, and a big reason why I've loved Macs since jumping ship from Windows.


Kinda sorta, but they came to Mac OS X about 20 years after the Cat.


I can totally understand why the Canon Cat was a flop. At the time it came onto the market, mouse-driven GUIs were largely seen as the thing of the future, and those didn't have to be explained: moving the mouse and clicking was like pointing your finger at the screen and tapping it directly. And then this weird thing with no GUI and no mouse comes along. Its UI may have been easy to use too, but it probably needed a lot of explaining and getting used to before you could internalize it, so only a few power users really made the effort to familiarize themselves with it.


It came out in '87, 3 years before Windows 3.0. Macs were still twice as expensive as PCs and accounted for something like 3-5 percent of new computer sales. DOS was the most popular PC OS at the time, and wasn't known for its user-friendliness. So the idea of a "turn it on and start typing" machine had some cachet.

I think the near absolute lack of marketing is what doomed the Cat and IA.


The Apple IIe and IIc were still popular in '87 and probably sold more than the Mac that year.


The keyboard is intriguing: the two "LEAP" keys before the spacebar, obviously to be used with the thumbs, are not dissimilar to what some ergonomic keyboards are now using. Apparently they're both labelled "LEAP" on the top of the key while on the side it's written "LEAP AGAIN" (that's what I see from googling a few images). On Wikipedia it says these keys are for "incremental string search".


If you press and release the LEAP key, it advances the cursor one character forward (or backwards if you hit the left leap key.)

If you press down (but do not release) the LEAP key you enter a search semi-mode. As you type a search term in this semi-mode, the cursor moves to the first instance of that search term it finds. After moving to the first instance of the search term, you release the leap key to exit the search semi-mode.

If you want to move the cursor to a subsequent instance of the search term, you press (and do not release) the "USE FRONT" key and press the leap key again (whose key front is labeled "Leap Again.")

You can see this in action in this YouTube video, but it happens pretty fast so you have to watch carefully:

https://youtu.be/o_TlE_U_X3c
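A toy model of that behaviour, in Swift purely for concreteness (a sketch of the quasimode described above, not the Cat's actual Forth code; the type and method names here are made up):

    struct LeapState {
        var text: [Character]
        var cursor = 0
        private var pattern: [Character] = []

        // Press-and-release with nothing typed: just nudge the cursor one step.
        mutating func tapLeap(forward: Bool) {
            cursor = forward ? min(cursor + 1, max(text.count - 1, 0))
                             : max(cursor - 1, 0)
        }

        // Pressing LEAP down begins the quasimode with a fresh pattern.
        mutating func pressLeap() { pattern.removeAll() }

        // Each key typed while LEAP is held extends the pattern and re-searches.
        mutating func typeWhileLeaping(_ ch: Character) {
            pattern.append(ch)
            jump(from: cursor)
        }

        // USE FRONT + LEAP ("Leap Again"): repeat the last pattern past the cursor.
        mutating func leapAgain() { jump(from: cursor + 1) }

        // Naive forward scan; the real Cat also leaps backward and wraps around.
        private mutating func jump(from start: Int) {
            let last = text.count - pattern.count
            guard !pattern.isEmpty, start >= 0, start <= last else { return }
            for i in start...last where text[i..<i + pattern.count].elementsEqual(pattern) {
                cursor = i
                return
            }
        }
    }

Holding LEAP and typing 'c', 'a', 't' would call typeWhileLeaping three times, homing in on the first "cat" as each character arrives; releasing the key ends the search and leaves the cursor where it leapt.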


They're briefly explained later in the article. It sounds as though they function similarly to ? and / in vim.


Closer is Emacs's ^s and ^r. You can type ^s text and it will jump to the next instance of ‘text’. Keep hitting ^s and you continue searching.

Pedantically, it’s a keystroke less than / <enter>.

The distinction is that the Leap key works more like a shift key, versus how vim and Emacs work.


Yup. Jef was big on eliminating modes, so came up with the LEAP "semi-mode". Emacs ctrl-s enters a mode that you use ctrl-g to escape (or at least that's what I use, there are probably many other ways to exit search mode.)

Subsequent ctrl-s presses in emacs is the equivalent of the "USE FRONT" / "LEAP AGAIN" key press.


You're absolutely right! I was weighing up whether to analogise it to Emacs or vi, and in my haste I made the wrong choice.


Is anyone aware of other apps that clone the Canon Cat's leap feature? I've always wanted to implement it in my own editors, but it's never quite reached the top of the list.

Vim search is similar in some ways, but I'm not sure it's really the same thing. I don't know vim well, but it seems to be missing many of the little details. Anyway, I would be interested to know about any other apps.


https://en.wikipedia.org/wiki/Archy_(software)#Leaping

> Archy is a software system whose user interface introduced a different approach for interacting with computers with respect to traditional graphical user interfaces. Designed by human-computer interface expert Jef Raskin, it embodies his ideas and established results about human-centered design described in his book The Humane Interface. These ideas include content persistence, modelessness, a nucleus with commands instead of applications, navigation using incremental text search, and a zooming user interface (ZUI).


> The system provides two commands, Leap-forward and Leap-backward, invoked through dedicated keys (meant to be pressed with the thumbs), that move the cursor to the next and prior position that contains the search string. Leaping is performed as a quasimode operation: press the Leap key and, while holding it, type the text that you want to search; finally release the Leap key. This process is intended to habituate the user and turn cursor positioning into a reflex.

> Leaping to document landmarks such as next or previous word, line, page, section, and document amounts to leaping to Space, New line, Page, and Document characters, which are inserted using the Spacebar, Enter, Page and Document keys respectively.



I suspect that Shortcat app recently advertised here was named after it. If not, it is a cool coincidence.


It basically works like C-s in Emacs, but within even easier reach since you're in search mode as long as you hold down the LEAP key, and releasing it puts you back in regular editing mode.


Is there a way to create such mappings in vim? E.g., a held Shift sends vim into incremental search mode and any input goes to the search string.


I am currently working on a new 3D system, and have been thinking a lot about GUI design.

My impression is that as applications have grown in complexity, there has not been a corresponding change in our approach to GUI design. I'm talking of desktop apps for doing 3D content creation (Maya, Houdini, Blender, etc).

The menubar and hierarchical menus worked elegantly in the simpler days of the Xerox Star and Apple Macintosh, but the continued reliance on them makes me chuckle at times.

The other day I decided to count the number of GUI elements visible in one such app. There were over 200. I can't prove this, but my feeling is that this visual/usage clutter can create confusion and anxiety in users. Or maybe users just learn to ignore the 90% of widgets they never need to use.

I find it increasingly awkward to have to move the mouse to click inside a 16x16 pixel widget on a 4K screen. Most GUI actions are not inherently graphical (click on a button, menu, icon).

One app has such a large contextual menu (with many submenus) that the menu includes its own search field, and users click to make the menu appear, then type in a few characters to locate the menu item they want, then click on the item. I can't help but shake my head and chuckle.

My own attempts at coming up with something different have resulted in a series of (short) popup menus that can be invoked via keyboard. My hope is that users will develop muscle memory to go to the selection they want quickly. For example, hitting "C,C,C" (the "c" key three times) invokes, in order, the Create, Curves, Circle menus (each menu replaces the previous one) and creates a circle in the 3D scene.

Been planning to make a video demo...
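A rough sketch of that cascading-menu idea, in Swift purely for illustration (the MenuNode/MenuDriver types and the Create/Curves/Circle wiring are just a restatement of the example above, not code from any shipping app): each keystroke either descends into the next short menu or fires a leaf action, so "C, C, C" walks Create, then Curves, then Circle.

    enum MenuNode {
        case submenu(title: String, entries: [Character: MenuNode])
        case action(title: String, run: () -> Void)
    }

    struct MenuDriver {
        var root: MenuNode
        var current: MenuNode

        init(root: MenuNode) {
            self.root = root
            self.current = root
        }

        // Feed one keystroke: descend into a submenu, or run a leaf and reset.
        mutating func press(_ key: Character) {
            guard case let .submenu(_, entries) = current,
                  let next = entries[key] else { return }
            switch next {
            case .action(_, let run):
                run()
                current = root          // leaf fired; pop back to the top level
            case .submenu:
                current = next          // show the next short menu in its place
            }
        }
    }

    // Hypothetical wiring: "c" -> Create, "c" -> Curves, "c" -> Circle.
    let circle = MenuNode.action(title: "Circle", run: { print("created a circle") })
    let curves = MenuNode.submenu(title: "Curves", entries: ["c": circle])
    let create = MenuNode.submenu(title: "Create", entries: ["c": curves])
    var driver = MenuDriver(root: .submenu(title: "Root", entries: ["c": create]))
    for key in "ccc" { driver.press(key) }   // prints "created a circle"

Hooking this to real key events and drawing the short popups is the part left out; the point is only that the state machine stays tiny and each menu's keys can be chosen to build muscle memory.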


>my feeling is that this visual/usage clutter can create confusion and anxiety in users

Designers have been stomping all over presenting useful information under this assumption for the past 20 years. Speaking for myself, I am sick of the hypersimplification of what should be useful tools. Complex workspaces have complex needs, and that's OK. Doesn't mean we can't find ways to improve, but just hiding things isn't the answer.


I think the onus should be on the UI designers to present features when they are needed. This means making an effort to model the users' workflow, which is not trivial.

A good first step can be seen in Lightroom, where the TAB key will hide the tool palettes, leaving the entire (uncluttered) screen available for examining and selecting/ranking photos. Hit TAB again to get the palettes back when you are ready to use one of the tools.

Just putting 20 icons along the edge of a window and washing their hands of it seems like a cop out to me.


That’s why I always advocate for formal training and certification when it comes to using complex software. Sitting down for eight hours or 40 hours in a classroom setting allows users to familiarize themselves with, or master, the other 90% of functions they aren’t normally aware of or haven’t used. There is software that needs to be complex because some job functions are complex; the big focus on collaboration in the workplace now means people with fundamentally different job functions use the same piece of software, which clutters up user interfaces as well.


The other day was a story about the CueCat. I always got confused between the CueCat and the Canon Cat!


I'm sure this may be interesting, but with such low contrast it's simply impossible to read.


I understand the mild annoyance but this is a problem easily solved by clicking the "Toggle reader view" button on your browser.


And you could install an extension to get rid of those "subscribe to the newsletter" popups etc.

Or you could let the authors know they're doing things wrong. Applying technical solutions to bad content sources only encourages their existence.

Edit: in this case, the article didn't seem unbearably low contrast to me. I guess mileage may vary with monitor.


> Edit: in this case, the article didn't seem unbearably low contrast to me. I guess mileage may vary with monitor.

Hence my suggestion for a technical solution.

Additionally, the beauty of the web is that for the most part[1] the user has full control over how they want to show the content, since the code is interpreted locally. Custom CSS, your own fonts, which JS you execute, your own page treatment. You can do whatever post-treatment you want with the content provided.

[1] Lazy loading through JavaScript makes this a bit less true, but it rarely concerns opinion/text content, and mostly affects web applications whose data can also be accessed through an API. The rest are dubious social media sites that try to steal your life, attention span, and privacy and make you an addict for commercial purposes; they are best avoided like all hard and dangerous drugs.


> Additionally, the beauty of the web is that for the most part[1] the user has full control over how they want to show the content, since the code is interpreted locally. Custom CSS, your own fonts, which JS you execute, your own page treatment. You can do whatever post-treatment you want with the content provided.

Your time is free?


I think it boils down to icons + mouse vs. text + keyboard. Imagine a Mac without Susan Kare's icons, which played such a big role in making it feel truly human and in developing an emotional attachment. Among so many great UI guidelines, that dislike of icons is the outlier, and for a minor and solvable (hover + tooltips) reason.


More like Canon Emacs!


For those unfamiliar with the device, that comment might be too cryptic (it was for me), but in the meantime I found its FORTH manual [1], where it says:

"Products designed by Information Appliance Inc. (IAI), such as the Canon Cat, have a number of unique features. One of them that directly affects third-party software development is the principle of editor-based software."

[1] http://www.canoncat.net/cat/Cat%20tForth%20Documentation.pdf


Software written in Forth. Here's a video showing its interpreter mode and Forth words https://www.youtube.com/watch?v=XZomNp9TyPY


I take issue with the "forgotten" moniker. I use my cat every other day.


Great article, great machine. Probably the single most important computer of the late 1980s.

I wonder if it would be possible to write an implementation of the Cat UI on top of Emacs?


There is no doubt Raskin had a lot of great ideas - I still wouldn't want to own or use a computer designed by him.


Well, AFAICT, he wasn't building a computer at all, he was building an appliance. Contrast with the "bicycle for the brain" model. I'm sure glad Jobs took over. (My SE/30, Classic, and Classic II concur.)

ADD: Jef was designing for a user who knew nothing, Jobs for the user who could grow


It just shows the genius of Steve Jobs. Jef's Macintosh would have been an utterly boring, dumb, and limited machine.


It wasn't designed to be a general purpose computer. It was designed to be a word processor.


No, not from what I've read -- a lot! -- not at all.

That is like saying "a bicycle is designed to be a two-wheeled car, where the user provides the power and balances it." Sounds awful and very impractical.

The Cat was designed as a device for entering, storing and manipulating data, primarily textual and numeric. It was also designed to be extensible, and could send and receive that info, do simple computations on numbers, including sets of them in rows and columns.

It was a lot more than a word processor, and with a very different, and from the sound of it, better UI than any word processor ever.


"Parents" buy PC for children's better grades (misunderstood as education process), read: to throw their responsibility for teaching kids away; and SPhones/Tablets to make kids happy read: avoid them & avoid raising them completely. (instead of creating the best educated friends you never had)

In era of Telly being religious altar & oracle, buying a boring tool without feature of entertainment and non-marketable as colorising life / uplifting status seems incomprehendable.

If, only boring tools were available children having other options would still not use them without pressure / good faithed convincing / bribery, so only these without other alternatives would use it.

Extreme majority is not teached to teach & convince/evangelise. What they know is forced through obligatory schooling or coerced by popular media reeducation disguised as entertainment & socialisation.

It conditioned minds and culture to prefer a sweet poisoned fruit or fool's gold over juicy beef steak earned through hard & uncomfortable. And it's even marketed as free willed choice & freedom..

Families teaching & training kids can fit in a statistical margin error. Not counting ideological training though.. which vary between better or even worse, usually just mirroring the present & future telling magic window.

Digressing

Being a loner born under communist occupation, a kid dumped with an old German-language Windows-like "PC" and printer, with only some black-and-white architecture apps, I can say from experience:

God damn all foreign languages (the Tower of Babel curse...); bless the universal language of self-explanatory images and idiot-proof systems with a one-choice-one-action principle, learning by exploring and building your own features from simple blocks on the fly.

Words, icons, and images are descriptors and ideograms. If we count letters as pictures, then words are made of pictures too, just with a predefined abstract meaning behind each composition.

We can map, plot, and precisely group common words and their meanings and relationships in multidimensional machine-learning graphs. So you could also use these algorithms, nowadays called AI, to find an image/icon approximation for each word and meaning. It would be weird if nobody had tried it yet and generalised it across the most common worldwide languages. It would create the only truly all-human lingua franca.

Images win by representing direct, visual, intuitive, least-abstract meaning. Like icons of human gestures. Like emotions.

An image is a direct representation of a real, distinct thing that you keep in visual memory along with all other instances of that idea/class and its common relations to other things.

But if you don't offer a fully separate choice, you will never know which symbolic system is better in a fair fight.

The border between the two is just blurred and has no codifier.

In time, universal self-explanatory meanings will mutate, adding new popular abstractions, and the two will merge.

Like me eventually learning the Mordor-language words on the magic boxes I clicked.



