Why Johnny can’t build a decent user interface (jeffreyellis.org)
95 points by cwan on April 19, 2011 | 39 comments


I completely disagree with the assertion that interface design is an "inborn" trait. Most people suck at it because they have not spent the time learning how to design a good interface (the whole 10,000 hours thing), usually because it isn't a priority or interest for them.


Here are some qualities I've noticed in great UX/UI designers:

- Deeply understands the technology involved and how people want to use that technology

- Looks at UI the way users do and is disgusted by pain points (à la Steve Jobs)

- Empathizes with users who use technology differently than they do (or anyone else on the team), and is able to build out UI specifically for their use cases

- Wrangles the development team to do everything possible to make life easier for the user: set-up that auto-detects your rig, Mint's on-boarding process where all you have to do is enter your username and password from your bank website log-in, etc.

- Concepts and implements things like emotional hooks, look-and-feel, embedded tutorial, etc.

These things are "seen" and not "done" like playing the piano or other activities which require 10k hours to master. Some people have excellent product experience vision and give themselves permission to use it in a business context. Other designers will push pixels their whole careers.

Some people think deeply about why they do things, others don't. I think that's the critical skill in UX design and cannot be trained.


Thinking deeply about why you do things a particular way is critical for being good at anything. You can replace "UX design" with "programming," "writing," or even "salsa dancing."


Speaking from experience, becoming a "good" UX designer has definitely taken me at least 5,000 hours already. In that time, I did plenty - specifically, I made a whole bunch of UIs that, for one reason or another, sucked.

In time, the more designs I made, the more I started to develop an intuition for things that worked or didn't work. Not to mention that before I learned how to design, I had to learn how to listen to my users - and before I could do that, I needed to learn to check my ego at the door and realize that even as the "designer", it didn't mean I knew a darn thing about what my design should look like until I talked to them (and sometimes, not designing what I originally "wanted" to design as a result). Yes, there are definitely skills involved in "doing" things that are hard to train, and there's a sense of "natural" talent - but with enough effort, even a computer science major like myself can pick up those skills.

And conversely, the hours you spend doing design are hours that are necessarily taken away from other things you want to be an expert on. Like NoSQL databases, or sleep. :)


"usually because it isn't a priority or interest for them" is the real sticking point. In order to make a really good UI you have to make sacrifices. A lot of them. All the time. Your time, for one thing. Most of your good ideas, the ones that you really liked...will end up not quite working, and need to be abandoned. Especially the really cool ones. That back-end that you wrote so elegantly? Well there's a critical use-case that requires you to write in a section of special-cased inelegance. Or, even worse, in developing the UI you realize that it really, really needs this feature...which will unfortunately require you to redesign your backend architecture.

Most engineers who set out to "make the UI" aren't willing to make those sacrifices, so you get a crappy UI. That's not meant to insult engineers, because they have plenty of other things to worry about. But unless you take it truly, painfully seriously, all you get is shit.


Yeah I've always been tweaked by the "inborn" trait thing. I think there's a natural affinity that people can have for things, but at the same time, anybody who puts in the hours can get there with this stuff. Yes, some people "get" how to play a piano right off, and they're naturally really good. For many people who are really good, though, they put in tons of practice hours honing their "piano qi". My 2 cents, at least.


It usually doesn't make sense to invest 10K hours in something that you rarely do. And if you want to do something as a professional, it usually makes sense to choose a field in which your innate talents could be useful, just because jobs are competitive, and people with better innate talents could improve much faster.

So in real life, capable professionals usually have some innate talents. But it doesn't have to be specifically a UX design talent. Maybe a talent for fast learning and grasping complex stuff could be used. Maybe a deep natural empathy you have with users.


I agree, but I have found that many people here hold the complementary view: that programming ability is an inborn trait.


I wonder if that is due to the nature of how many of us got into programming, i.e. spending hundreds-to-thousands of hours in front of a computer.

I've been "programming" in one sense or another since I was about eight years old. Anything that you have been doing that long, from that early an age, is going to seem "inborn" if you don't think about it hard enough.

Certainly, some people are better at certain things than others (just watch me throw a ball), but that can be overcome with hard work (IMHO).


> hold the view... that programming ability is an inborn trait

I recently wrote someone a recommendation where I argued that programming languages and patterns are readily teachable, but holding a complex model in the mind and communicating it effectively to peers, laymen, or computers (the computer is the ultimate simpleton) is less easily taught.


I agree, but what you're saying is that teaching teaching is hard. And I think it is, for reasons I brought up here: http://news.ycombinator.com/item?id=2360570


Heh, I suppose I was being too polite. I'm not sure holding that mental model in the head can be taught. I think it's a key driver of excellence in innovation and insight across a variety of industries, not just software development, and whether nature or nurture, seems to be formed very early--through avid reading or story telling, perhaps?

I agree with your linked comment about teaching. One university teacher told me that to lead someone to your point of view, you must first put your arm around them, so that you start the discussion from a place where you can see theirs.


The idea of an "inborn" trait in this context seems to me to describe a person who was able to learn something without really knowing how they learned it, which prevents them from being able to effectively teach it. Hence the inborn trait as an explanation of the magical process of learning how to do something.


I think the original intention of "Why Johnny Can't ___" was to point out what the environment around Johnny is doing wrong, and to provide action items both for those in a position to change that environment and for Johnny himself. This article, OTOH, offers only criticisms of Johnny with no real action items for how we (all of us are Johnny in something) can correct some of this abhorrent behavior. I for one would like more advice from the designers (UI/UX) at HN covering how I, as a low-level software dev, can start improving my design abilities. I don't want to replace a designer, but I want to get better at explaining my needs to a designer and maybe be able to help come up with the look and feel of my 'vision'.


Sorry, I thought I did provide some advice, not just criticism - specifically the last 6 paragraphs of my post.


I dunno.. your article only has 2 links - 1) 'Websites that suck', 2) your own article about how 'incompetent individuals tend to overestimate their abilities'.

The last 6 paragraphs are fairly obvious points where you do not go into detail or give examples. How is the reader supposed to know what you mean in a way that is useful? Some links to examples or how to implement would be useful advice.

Also, I'm curious, why the snarkiness? Do you consider being critical of others to be part of your online persona? Is it for entertainment value? Is it an effort to improve something? I don't mean to pick on you - I'm actually curious.


I was honestly not trying to be snarky. Can you point out where I was? Seriously, I'm not trying to get defensive here - it would help me if you could point it out.


Perhaps snarky was not the right word - the article contains little/no sarcasm. I think myself and others are reacting to the general feeling of a largely negative criticism without any focus on good design done by engineers (or anyone).

Words like 'mad skillz' imply condescension towards some hypothetical group of developers. Words like 'self-serving', 'lack of', 'incompetence', etc are just negative - again targeting a straw-man group of developers.

Within your article there seems to be a paradigm of a) stuff that 'sucks', and b) the 'right' way (your way).

So, it leaves me thinking that you are targeting a negative generalization towards a group of developers. Effectively, you created an ad hominem argument against a straw-man. This comes off as negative, but more so it tells me as a reader that you spent energy evaluating the scenario from the standpoint of 'this is wrong', without fully understanding the point of view of your straw-man or pointing to examples of good design.

Instead of conveying 'these guys over here suck, they should do it my way' you could have conveyed 'these guys over here can suck, but when I have observed them doing x, y, z they definitely suck less' - same message without the ad hominem. Or, even better 'these guys over here are good at design vs average engineers because they do x, y, z' - same message with an emphasis on positive examples, not negative examples. That would convey that you have taken the energy to evaluate both the negative and the positive about the straw-man developer group you appear to be attacking.

As a general aside, I have been trying to understand why many smart / skilled people bias towards viewing others in a negative light - ie, does it have a benefit? I am generally interested to understand where this negativity comes from when skilled people evaluate their peers.


Thanks for the feedback. Negative criticism, I was certainly engaging in. I'm not sure that I'm attacking a "straw-man group of developers," though -- I certainly never meant to convey, for example, that all developers are bad at user interface design -- but I did say "most" so perhaps I was painting with too broad a brush. And agreed that I could have provided concrete examples -- perhaps a future post.


"I thought I did provide some advice, not just criticism"

Yes, but as the OP points out it's advice for Johnny, as opposed to a more general indictment of society.


You tell the reader the very real/important fact that the mental model we use to solve the problem in code isn't the same mental model the user will have, even though the programmer is so accustomed to that model that it will certainly heavily influence any UI they would design. Other than encouraging the reader to understand the use cases and use that as a starting point for an interface, you don't really explore where to go from there. It is a valuable starting point, but isn't much.


Thanks for the feedback -- perhaps a follow-up post with more detail is in order (or a list of resources).


Well, if you ever do end up writing that follow-up post, I thought I'd share a game that I play with my brother, which I have found has helped me enormously with interface design.

Some background - I'm a software engineer, with my true love being application design, though I have often worked in other fields. My brother is an architect, so he has had quite a bit of formal schooling in design technique, and well, he just enjoys doing it. Anyway, when we get together, we have a game we like to play where we pick an average everyday device, and we try to design a better user interface for it.

There seems to be a very constant sequence of actions that we follow when playing the game. The most important of all is to start playing the game - which is to say, if you aren't trying to improve an interface, you never will. But once we start trying to improve an object's interface, we generally start by identifying flaws in the current design, the pain points. Once those have been found, we start tossing around ideas that might fix the flaws. It's at this stage that something interesting starts to happen - you get a deeper understanding of the problem the object was trying to solve, and that understanding frees you up to try something really different - the old smartphone to iPhone type of jump.

To give an example, the last time that we played the game, we were working on traffic lights. Some of the flaws that we found included:

- Light bulbs that blow

- Lights get lost in the background of dense cities such as Paris or London

- The posts take up valuable street space, making them a hazard for pedestrians and dangerous if a vehicle loses control

- Inflexibility to changing conditions - for example, when I'm the only driver on the street for the last 10 mins, why am I stopped at lights for a minute waiting for them to go green?

Once the flaws are identified you can start tossing around solutions. In the case of the traffic lights, we started to understand that the object itself was the problem. Traffic lights are a solution that was designed before the advent of ubiquitous computers and wireless communications. These days you could design a system where your car receives a signal from any intersection saying whether it has to stop or not. The traffic light itself can be moved inside the car, or even tied to the control system if you are about to run a red light. That would be a good first step: no more poles, no more distraction from blinking neon signwork. But there would still be the problem of maintenance, since the radios could blow. And it still doesn't stop you from getting stopped at an empty intersection.

Our next iteration required a deeper understanding of what traffic lights are actually trying to do, rather than just what they are doing. They are trying to control traffic flow. So, the final solution to traffic lights is to find a better traffic flow solution. In our game we eventually decided that AI cars, capable of driving themselves and connecting to a regional traffic control server, would be able to negotiate their passage at each intersection. No more traffic lights anywhere in the system, because at the end of the day they are a hack that was created as a stopgap solution at a time when we didn't have the technology to do any better, and which now continues on due to inertia.
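As a toy illustration of that negotiation idea (everything here is hypothetical - a real system would need collision-free paths, priorities, and failure handling), the intersection could act as a scheduler that grants each arriving car the earliest free crossing slot, so an empty intersection never makes anyone wait:

```python
# Hypothetical sketch: cars negotiate passage with an intersection
# controller instead of obeying a fixed-cycle traffic light.

class IntersectionController:
    """Grants each arriving car the earliest free crossing slot."""

    def __init__(self, slot_length=2):
        self.slot_length = slot_length   # seconds one car needs to cross
        self.next_free_time = 0          # when the intersection is next free

    def request_passage(self, car_id, arrival_time):
        # A car never crosses before it arrives, and never before the
        # previously granted slot has cleared the intersection.
        start = max(arrival_time, self.next_free_time)
        self.next_free_time = start + self.slot_length
        return start                     # the granted crossing time

controller = IntersectionController()
print(controller.request_passage("car-1", arrival_time=0))   # crosses at 0
print(controller.request_passage("car-2", arrival_time=1))   # waits until 2
print(controller.request_passage("car-3", arrival_time=10))  # empty road: no wait
```

Note how the empty-intersection flaw disappears for free: a lone car is granted its own arrival time as its slot.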

Anyway, if you keep playing the game, you eventually get to the point where you really start to understand design as being an attempt to get to the heart of the problem you are trying to solve. This process helps you find the original solution to the problem that makes your object a pleasure to use, rather than something that you have to fight against.


He forgot two reasons interfaces go wrong:

1. Giving users what they ask for; and

2. Designing to sell.


Number two is an insight I hadn't considered, so thanks for sharing.

Isn't one of the things that generally makes Apple's products arguably great that the hardware is designed to be true to its purpose, while the marketing campaign is an entirely separate and often equally brilliant thing, also true to its purpose?


Designing to sell - that reminds me of the last laptop I purchased. It had every gizmo turned on, endless vendor crap installed, booted up like a dog and ran worse.

There was definitely zero interest in making my experience a good one. Some marketing committee had gotten ahold of the product and this was the result.

1st thing I did: wipe it and reload the OS. With OS media I had to buy separately.


As a designer and sometime front-end developer, I find that writing code can be at odds with taking a user-centric point of view. I've often been tempted to re-use a view simply because I didn't want to maintain two different versions that were almost the same... surely the user wouldn't notice a difference. I also am less likely to start over when I have a better UI idea if I've already committed the first approach to git.


Programming isn't a code-production activity. It's a knowledge-acquisition activity. It was worth writing that code to learn about the approach you were thinking of. It's also worth throwing it out once you see that there's a better UI idea.


I also disagree with his “inherent ability” argument. I think that interface design is a learnable skill and can be improved through practice. Which of us is right? We’ll never know, as it’s not really a testable assertion.

In my opinion, that makes it a pretty weak argument that detracts from the rest of his article. Besides that, it could have the effect of discouraging people from trying, as they’ll just think “Oh well I can never do that because I don’t have the genes, or whatever.”

My opinion is that programmers in particular have a hard time with design because they’re not presented with a systematic way of thinking about it. That’s what I’ve started to try to do with my site http://www.visualmess.com . It tries to instill the “user-centric” mentality and provide a framework for thinking about design based on the brain’s visual information processing capabilities.


This is a great post to get people thinking about the amount of time and effort that's required to build a great user interface and work flow scenarios.

It took me a long time to appreciate the difference a great interface makes for an application. For me the problem comes from the original team that starts building out the product. They are working on sweat equity, don't have a $1 to spare, and are rushing to market. If they don't have a designer among them, design and usability get moved to 'When we get some traction, we'll redo this' - and it never gets re-done, and more items are piled on top of the quickly made UI. By the time the product ships the UI is almost unusable.


Back in the days of text mode applications, we did not complain about user interface design. Either it was seen as a lesser problem, or text mode itself is better for presenting user interfaces.

I'd argue that the latter is true.

Firstly, in text mode, screen real estate is limited. One has to be careful in what one chooses to display on the main menus.

Secondly, we already program in text mode. Creating text mode menus poses less cognitive dissonance for the programmer than dragging and dropping radio buttons and menu bars.

The bad user interfaces that I've seen usually present all available options within the main screen, and it ends up being a crapfest of controls with no obvious guidance to the poor user as to what is a sensible next step.

Text menus also present a very structured way for users to explore a program. If you make a mistake, you have a better chance of retracing your steps. In contrast, WIMP systems may present multiple entry points for a single piece of functionality, with little information on how to undo one's changes. Toolbars are the worst in this respect: the button that one clicks to "perform" an action is not the same button one might use to "undo" it.
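One reason retracing is easy in a text menu system is that navigation naturally forms a stack, so "back" is always one unambiguous step. A minimal sketch (the menu contents and names here are made up for illustration):

```python
# Sketch of stack-based text menu navigation: the breadcrumb path IS
# the navigation state, so retracing a step is always a single pop.

MENU = {
    "Main": ["File", "Edit"],
    "File": ["Open", "Save"],
    "Edit": ["Undo", "Redo"],
}

class TextMenu:
    def __init__(self, root="Main"):
        self.path = [root]               # breadcrumb of visited menus

    def enter(self, item):
        if item in MENU:                 # only submenus can be entered
            self.path.append(item)
        return " > ".join(self.path)     # e.g. "Main > File"

    def back(self):
        if len(self.path) > 1:
            self.path.pop()              # retrace exactly one step
        return " > ".join(self.path)

m = TextMenu()
print(m.enter("File"))   # Main > File
print(m.back())          # Main
```

A WIMP interface has no single equivalent of that `path` list - focus, dialogs, and toolbars each carry their own state - which is one way to frame the undo problem described above.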


The central problem is lack of incentives, or misaligned incentives. It has relatively little to do with the character of the developers. Developers are generally not stupid. Many of them could learn to be better designers, and they could certainly learn to consult with better designers. But why should they?

If software is sold based on a checklist of features then it is more important to build ten ugly features than one beautiful one. If software is sold based on beautiful demos then it pays to make the demo awesome but skimp on the corner cases. If software is sold based on gorgeous screenshots and icons then the median icon or screenshot will be gorgeous, even if the app itself is missing or broken. (Even the broken iOS apps have beautiful, beautiful icons.)

If your company has so many layers of management that the feedback from end users never makes it back to your desk, you'll generate designs that please your management.

If you're at a bootstrapped startup, searching for product-market fit, it pays to design incrementally. Even if incremental design is the wrong approach for the problem, it still pays to design incrementally, because a better-designed app with no market is a disaster. That means that there are many theoretically possible designs that are difficult or impossible to achieve in practice.

The trick to improving the design of software is to figure out how to make it pay. A major enemy is the rate at which the landscape changes. Computing still changes so quickly that there's a premium on disposability. The jury-rigged contraption built of metaphorical duct tape may be barely usable, but it's cheap and quick to build, and in a year nobody will care: they'll have traded it in for the next model, or the platform will be obsolete and everyone will have to move, or you will switch jobs (perhaps by selling the product to Cisco) and you won't care anymore.


How about iterating based on what the users are actually doing in the UI (or on the website)? PG said in one of his essays that if you don't know how to design, keep it really simple. That's probably a starting point. As a developer learning to design, I find myself thinking in terms of data rather than user experience.


Well, if Johnny is very good at algorithms, then the stereotype says Johnny is not very good at interacting with people and, in general, not good at figuring out what's going on in other people's heads.

I might be biased, but I always held this as axiomatic truth.


The link to the worst web sites includes:

> http://art.yale.edu/

Oh how very, very ironic.

The Yale School of Art.

Looks more like the toddlers' school of scribbling.


I don't think some people are aware an entire discipline exists within Software Engineering called HCI [1]. Is the distinction made between graphic design and ui design? I'm not so sure. But this article stinks of ignorance as to what UI design really is.

[1] http://en.wikipedia.org/wiki/Human-computer_interaction


Did my sentence "...and most software engineers don’t understand the value of true human factors engineering — in particular the cognitive psychology and human-computer interaction expertise that human factors engineers bring to user interface design" not show that I am aware of HCI?


Most software engineers do not need a deep understanding of Human Factors as a working tool; it is a broad term covering multiple disciplines and mediums, particularly focused on physical forms. HCI is focused on computer interaction; you are discussing user interface design.


Citation needed. The only thing I've ever seen or heard of even /vaguely/ concerning cognitive psychology and user interfaces is bikeshedding about response time.

Usually it's just: some graphic design expertise, and WoW-addict-like dedication to smoothing out the details.




