Victoria Bellotti Interview – HCI Research and Practice: Past, Present and Future

Introduction to the Interview

I have always argued for the strongest possible relations between HCI research and HCI practice. For example: ‘The HCI discipline can be summarised as: the use of HCI knowledge (acquired by research) to support practices (of design and evaluation) seeking solutions to the general problem of HCI (humans interacting with computers to perform work effectively)’ (Long and Dowell, 1989). My own research (along with that of many others) has acquired such knowledge, in my case in the form of design models, methods and principles to support practice.

However, I have always been acutely aware of the gap between knowledge acquired by HCI research and its application by designers. Bellotti reported empirical support for such a gap in her study of commercial system interface design projects (1988). She concluded that: ‘The study suggests that HCI design and evaluation techniques, although potentially valuable to commercial design, are not applied in practice’. A comparable study of London HCI Centre designers (the commercial arm of the Ergonomics Unit) reached the same conclusion. Later, in 1997, I cited Bellotti’s paper, concluding that: ‘In terms of the capability maturity model (Paulk et al., 1993), HCI fails to support design practices, which are neither ‘defined’, ‘repeatable’, ‘managed’ nor ‘optimised’.’

 

My view has remained unchanged, in spite of the intervening years. In my 2010 Festschrift, I wrote: ‘….I would like to celebrate the world of HCI. Obviously, the students, practitioners and researchers, who identify themselves with HCI and who together make up the HCI community. But also the IT professionals, outside the community, who do not identify with HCI, but who actually design so many of the interfaces in use today. Most IT interfaces continue to be designed and implemented by such professionals. We forget them at our (professional) peril.’

 

Further, I expressed the hope that: ‘HCI research improves the effectiveness of the design knowledge, which it acquires to support HCI design practices (a hope shared by Festschrift authors – knowledge, which is ‘more assured’ (Carroll), ‘more reliable’ (Dix) and ‘offering a better guarantee’ (Hill)). Anyone who doubts this need should seriously consider: 1. How much interface design is performed by IT professionals, outside the HCI community; 2. How little actual HCI design, as opposed to related studies or evaluation, is carried out by individual HCI practitioners (as consultants) or even by those working as teams in large organisations; and 3. How much design is performed with little or no reference to HCI design knowledge (of any or no conception), other than perhaps evaluation. But how is this much-needed improvement in HCI design knowledge to be achieved? In my view, it can only come about if HCI research and practice diagnose more design problems and prescribe more design solutions and, in so doing, evaluate the effectiveness of HCI design knowledge (of whatever kind).’

 

However, lots of HCI water has passed under the bridge in the last 25 years or so. Also, I am a researcher and not a practising designer. A timely reconsideration of the relations between HCI research and practice – past, present and future – seemed a good idea for the website to address. By sheer good fortune, I was in contact with Victoria Bellotti at this time and, although she did not have the time or the motivation to contribute a paper on the subject, she did agree to give a written interview. Victoria is both a very successful researcher and designer and, of course, set the ball rolling in 1988 with her paper cited earlier. We should be so lucky.

John:

Victoria, hi, and thanks for being interviewed.

Introduction to Victoria Bellotti

Following a good degree in Psychology, also at UCL, Victoria obtained her MSc in Ergonomics in the class of 1984/85. She was obviously a bright and capable student, although, as her Director of Studies, I thought she could be doing better and said so. Never less than forthright, Victoria thought otherwise and considered that she was doing well enough under the ‘prevailing circumstances’ and said so. She turned out to be right. And of course, she has never looked back (see also 1983/84 Victoria Bellotti – MSc Reflections). I have known her on and off since this time. Mostly off, it must be said, although I have followed her progress with interest. We have met at various conferences, continuing to exchange forthright views and spilling beans of different sorts. Of late, we have been in e-mail contact about this and that. Victoria is a real all-rounder. There is not much she cannot do. My abiding memory of Victoria is of her playing (I think jazz) violin at Chat’s Palace. Those were the days.

John (1) 1 October 2014:

To kick off, I would like to take you back some 25 years (yes, that long ago) to your 1988 paper, presented at the HCI SG conference at Manchester, entitled ‘Implications of Current Design Practice for the Use of HCI Techniques’. You concluded that:

 ‘This study suggests that HCI Design and Evaluative Techniques (DETs), although potentially valuable to commercial design, are not applied in practice. The design environment conditions required for the successful application of current HCI DETs do not appear to be satisfied by commercial design projects. The reason for this is the existence of unavoidable constraints in commercial design which future HCI DETs should try to cater for.’

 

The conclusion created quite a stir, as I remember it. My question, then, is: looking back on the study itself, the paper and its reception, what did you think of these different aspects then, and what do you think now? Please feel free to respond to the question in its two parts or in any other way, as you see fit. Over to you.

 

Victoria (1) 11 October 2014:

 

At the time, I found that HCI DETs (that’s design and evaluation techniques) were complex to understand and slow to apply, and it was hard to get the analysis right. Meanwhile, software engineering projects had tight schedule constraints that left no time for a usability engineering team of typical size to get the HCI DET work done before the UX design was going to be fixed. Those were the days of the waterfall model of software engineering, where design decisions were locked in at the beginning, so the UX team would have barely any notice after the start of the project to do all their work. The sad thing was that so many HCI people were working on these modeling approaches and yet no one was ever going to apply them in practice. This might be why the Medical Research Council pulled the funding for the Cambridge Applied Psychology Unit (APU), which continued to invest heavily in such work for years after I published my paper.

Ironically, in 1989, I was hired at EuroPARC for a job that involved collaborating with the APU, which was trying to apply such modeling approaches to design practice. Since I really wanted to work at EuroPARC, I took the job and so had to keep doing it, even after I had published a paper saying it wasn’t likely to be feasible. I stayed at EuroPARC for five years and much of my time was spent working on this problem, funded by a European grant. Unfortunately the project – and it was a big project too – never led to a tool that anyone used for designing a product. I was not surprised.

 

Anyway, today, those supposedly compressed timelines seem ridiculously long. We have agile methods or lean start-up, and UX development toolkits and libraries, and you can get a system up and running for user testing in a few days if you’re not trying to do anything particularly novel. So it’s much faster to test with real users than it is to do anything theoretical. And time is money. The longer you delay product release, the more you fall behind your competition, so the slightly higher expense of recruiting people to try your latest, greatest new application is well worth it to expose your UX flaws as quickly as possible. Moreover, the UX toolkits stop you from making a lot of dumb UX errors anyway, since all the low-level design work is done. And users are rarely naïve these days, so they already know how to use all those widgets. So everything has moved on a long way (fortunately) from where we were back then.

 

 

John (2) 26 October, 2014

 

Victoria Hi!

Thanks for this. I really enjoyed reading and thinking about your first contribution. It even brought back some memories of my APU research days. Off to a good start, then?

 

Actually, your response raised a whole range of issues, concerning HCI Research and Practice. To keep the interview manageable, however, I have reduced them to three questions, all reflecting research, practice and their relations. Here is the first question:

1. You make it quite clear that, from their inception, the initial design and evaluation techniques proposed by HCI researchers were ‘complex to understand and slow to apply and hard to get the analysis right’ for UX practitioners. In addition, many HCI researchers worked on modelling approaches, which were never applied in practice. In contrast, ‘agile methods or lean start-up and UX development toolkits and libraries’ currently support early user testing and so obviate the need for the modelling approaches cited above (even assuming they could be applied).

My question here is: how did the methods and toolkits currently applied by UX practitioners get to be developed – for example, in terms of activities: research, development, practice or some combination? Also, what sorts of professionals developed the methods and tools (psychologists, software engineers, UX practitioners themselves, inventors etc) and in what kinds of institutions (academia, industry, start-ups etc)? Lastly, who provided the funding?

 

Victoria (2) 31 October, 2014

This is a tough one, since I wasn’t there, but my hunch is that UX practitioners and engineers developed these things as they worked on commercial projects, improvising time and time again, sticking with the things that worked and then refining them. So the funding wasn’t research-oriented, but came from employers or customers who just wanted a product and didn’t really care to look inside the black box of development methods as long as the end result was good. After writing this, I Googled the author of the eXtreme Programming books, Kent Beck. He was very influential in my own philosophy about UX design. There is an interview here – http://accu.org/index.php/journals/509 – where he talks about where his ideas came from. In collaboration with Ian Smith at PARC, I used a lot of these ideas when developing our email-based personal information management prototypes back around 2001. Anyway, it seems my hunch was at least partially correct. It’s clear that Kent reads and exchanges ideas with other smart folk and applies what he discovers to his work in the commercial sector. So other thinkers provided ideas and then he figured out how to apply them and integrate them into a coherent approach. I’m sure this applies to many of today’s innovators.

 

Likewise, Eric Ries, originator of the highly influential Lean Startup movement, whom I greatly admire, learned on the job through repeated failure. He was influenced by Steve Blank, who is an advocate of user evaluation (i.e., getting people to try things as soon as possible in the design process). So, the heart of Eric’s Lean Startup philosophy is about reducing risk by testing your biggest assumptions/hypotheses first with real users, preferably in a realistic setting. If you are wrong, then you pivot to a new hypothesis. And then you keep on testing and pivoting as you refine your product idea.

 

So, to sum up, I think necessity and communication have been the parents of the best inventions in UX development philosophies. People working under pressure with limited resources had to try things out in the design and development process and then learn from the costly mistakes they made. They were also influenced by other people who had lots of valuable experience from making and learning from their own mistakes, and who had in turn learned from others.

 

I hope this satisfies…

 

John (3) 1 November 2014

More than satisfies. It is quite fascinating. It raises a whole host of issues and threads that we might pursue later in the interview. At this stage, however, for the sake of continuity, I would like to pursue the second of my three initial questions, which is:

2. Given the above changes to UX practice over time, what is the current relationship between software engineers and UX practitioners? If system development includes both specification and implementation, then I assume software practitioners, at least, do the latter. But who does the design and evaluation – SE or UX or some combination? The question is important, because if research is to support practice, then the types of practice need to be identified.

 

Victoria (3) 7 November 2014

 

As far as I understand practice in commercial settings, where systems are being built to support human interaction, it’s the UX people who do the design and the engineers who implement what they design. UX people have a lot more power in the design process than they used to back in the 80s and 90s. The standard best-practice process is to do user research to understand relevant practices, then to make rough sketches and obtain feedback on them from target user representatives. However, one or both of these steps might be skipped if the development team is supremely confident about what they are developing (e.g., if they aren’t making something completely novel), or if they are very inexperienced and buoyed along by hubris.

Then someone normally has to work out the logic of the screenflows for the user paths through the system, and this is generally a UX person’s job. Lots of tools are available today to support wireframing, where the UX person creates a sketch of the elements of the system before they even do the proper graphic design of the look-and-feel. The logic of the interaction, with the required interaction elements, gets worked out first. Then the GUI, with its buttons and icons (as image files, of which there are plenty of libraries online), gets fleshed out. Then engineers put the behaviour behind the graphical elements.

However, things are already changing, with development toolkits that make it unnecessary to do basic design, providing a lot of the building blocks, which you just configure. These range from platforms that require a lot of expertise, like Drupal, which supports web application development, all the way to very sophisticated platforms like Grid (currently not yet released), which claims to use AI to design your website for you. Apparently you just pour in your content. So, depending on what people are trying to do, and how innovative they want to be, it’s getting very diverse, and you may not need UX people, or even developers, at all if you want a very standardized experience.

In Lean Start-up, an approach to development which is becoming extremely popular in web services and consumer applications, the idea is simply to define a minimum viable product first, which has only the most basic features required to test your biggest assumption(s), by seeing how people react to it or them. Only if people sign up and start trying to use your system does it get fleshed out properly. The leanest example I have seen is from Mark Pincus, founder and former CEO of Zynga, who used to put a button up on a web page offering a new game. Only if people clicked on it would he start to develop the game. Confused clickers would just see an error message until it got built.

John (4) 1 December 2014

So, more radical changes on the UX front, even without much mention of research. Again, I want to continue with my third original question and pick up the threads exposed here a bit later. The question is:

3. Research acquires knowledge to support practice. Knowledge is necessarily both substantive/declarative (knowing what) and procedural/methodological (knowing how). Agile methods, lean start-up and UX development toolkits and libraries are applied to specify designs, using some representation and resulting in some products or other (for example, wireframes, simulations etc). The latter are models by any other name (albeit of a certain sort). The question arises, then: what makes these models (and methods) easier to understand, quicker to apply and easier to get the analysis right than the earlier modelling approaches, which were never applied in practice? Further, are the current models and methods acceptably effective? Do they do a good enough job for the purposes in hand?

Victoria (4) 3 February, 2015

This question is the toughest, because I don’t feel I have a very good handle on what’s going on out there in the agile and lean start-up community (mainly because it’s simply huge now, and they don’t publish for the most part; they just make stuff). Suffice it to say that I see a lot of start-ups that have pretty good user experience design and that are based around nice ideas that address real problems. Of course, there is a lot of competition for almost anything you can think of, so each start-up has to do the best it can to appeal to over-stimulated users with so many new things to try. There are a lot of awful new ideas too, but they don’t usually grab my attention. They are just lost in the innovation cacophony.

I think where the new tools beat the old ones is that they were developed by practitioners – or rather entrepreneurs and start-up employees – who distilled them from their own experience of what worked for them, rather than by academics who never met a product developer in their professional lifetimes. The academics were talking an entirely different language, with terms like: user conceptual models; internal and external consistency; abstractions; grammar; goals, operators, methods and selection rules. None of these terms was correctly interpretable by anyone without a degree in HCI. And the models were mostly being communicated through CHI papers, or tossed over the wall in internal reports to product divisions in the corporations that employed some of the early HCI researchers. If the HCI folk had been embedded in the product divisions, perhaps they would have made their models more user-friendly for their target market – the engineers.

Modern tools make it easy to quickly mock up an interactive prototype of your application that looks just like a real application and even runs on a mobile phone. It’s almost not worth bothering with a wireframe, because the new tools are so lightweight and you don’t have to worry so much about making detailed decisions about how to enable any given interaction. The reason for this is that we now have user interface design ‘patterns’, which have been established as solutions for a multitude of interactions and have been found to work well through observation of actual use. So, essentially, the new way of designing your user experience is to copy as much as possible from tried-and-tested ways of taking users through routine interactions such as: joining a service; browsing content; filling out a profile; making a payment; etc. You can even find websites dedicated to sharing these patterns. Over time, standards are being established that reduce the amount of effort that goes into designing your UX and that allow users to develop expectations after being exposed to the same patterns again and again.

Perhaps some of the designers who use these patterns don’t understand why they work, as someone trained in formal HCI might. For example, they might not realise that an accordion menu visually anchors navigation in context, giving users confidence in tracking their progress down through a hierarchy, thus removing uncertainty and a strain on memory (where was I just now?). And perhaps this means that we may occasionally see ‘creative’ design bloopers in otherwise slick and easy-to-use services, where designers have strayed from the beaten path. But for the most part these established examples are doing a fine job of allowing thousands of design teams to create intuitive services that anyone can quickly learn to use. For example, my 83-year-old mother was able to create her own account on Streetlife.com recently without any help from me… I can’t recall that sort of intuitiveness being much apparent in the new software products of the 80s.

 

John (5) 22 February, 2015

 

Victoria Hi!

 

Thanks for this. More interesting insights about HCI/UX practice, which seems to have been getting a very good airing so far in this interview. You are right: things have moved on a lot since the DETs days. Good on your mother!

At some point, I would like to get on to research, so that we can discuss it with respect to what you have been saying about UX design practice. However, no hurry. Before so doing, I think it would be interesting to hear a bit about what you actually do in terms of research and practice at this time and how it relates to the past. PhD student, Amodeus researcher, Apple Research Scientist, Palo Alto Senior Scientist and PARC Principal Scientist all make it sound a pretty ‘researchy’ career path (or perhaps better, a ‘research and developmenty’ one). I think a brief overview would help us to understand better where you are coming from, so to speak. It would also serve to set the scene in a timely manner for us to move on to a discussion of HCI/UX research. What do you think? Alternatively?