Journal of Ergonomics
Open Access

ISSN: 2165-7556

Research - (2017) Volume 7, Issue 3

k Reasons Why Ergonomics Cannot Make Interactive Devices to Be User-Friendly (k≥3)

Bouwhuis DG*
Department of Industrial Engineering and Innovation Sciences, University of Technology Eindhoven, Eindhoven, Netherlands
*Corresponding Author: Bouwhuis DG, Department of Industrial Engineering and Innovation Sciences, Human Technology Interaction, IPO 0.24, University of Technology Eindhoven, P.O. Box 513, 5600 MB, Eindhoven, Netherlands, Tel: +31 40 24157490

Abstract

Interactive interfaces are encroaching on ever more devices in our daily life: they appear on set-top boxes, TV sets, hand-held telephones, washing machines, kitchen ovens, home thermostats, car navigation units and parking meters, to mention only a few. Despite the fact that practically all of them feature a menu-driven interface of the kind that has been around since 1995, their usability has not improved; with their continuing proliferation there is even a tendency for interfaces to become more inscrutable, rather than less. At the same time human factors engineering and ergonomics have become serious disciplines, widely endorsed, and the associated expertise is easily available and accessible. Apparently, all scientific knowledge about human interactive behaviour does not find its way into transparent interfaces for many products for private use and for public systems. It is argued here that this situation is caused by a number of reasons that are almost impossible to eliminate. In this paper five reasons are discussed, some of which are related, such that under another definition fewer reasons might emerge; three reasons, however, seem to be the minimum. The reasons originate from the software life cycle, from cognitive models and from beliefs of the stakeholders. Acknowledging that in the current industrial and economic context the current generation of interactive interfaces will necessarily have problematic usability, some measures for improvement are mentioned, e.g. regulation or standardization, like the familiar ones that exist for electrical, radiation and medical safety.

Keywords: Interactive interfaces; Design; Cognitive models; Menu generation; Usability

Introduction

In recent years communication with interactive devices has quickly taken over from other forms of communication, be it face-to-face, written, printed or by telephone. An important reason for this is that communication with an interactive device replaces human effort, with associated cost savings. Another reason, mentioned by service providers more often than the cost savings, is the direct contact, with 24/7 availability, and the possibility of wider functionality and more detailed control.

The latter property, human control, is behaviourally speaking not always desirable. For example, early automobiles featured a choke handle to ease the starting procedure, but from the sixties the automatic choke made it superfluous, with hardly a complaint from the side of motorists. Nowadays cars may feature automatic windscreen wipers, light switches, seat adjustment, climate control, and street-side parking, not to speak of the navigation equipment. All of these functions have been performed by human drivers for decades, but eliminating these types of human control is not regarded as a loss, but rather as a desirable luxury.

Also, human control may be too inaccurate for tasks requiring high precision or precise timing. For such tasks automation, and the consequent absence of human control, is highly endorsed.

On the other hand, there are many new functions that human users would like to, or have to, control, such as the temperature in the home, choosing the desired TV program or channel, uploading recent maps to the navigation unit, setting the right program for the kitchen oven, measuring the glucose level for diabetics, initializing the new Hi-Fi audio and home cinema set, or paying at the parking meter for a parking spot. This is by no means an exhaustive list, and as time goes by many new functions will be introduced and ever more devices will sport an interactive control interface, while some mechanical functions will disappear altogether.

Progress in micro-electronics and digital engineering has made it possible to equip many products with interactive control interfaces that replace traditional control procedures.

But control procedures have also changed over the 20th century. Docampo Rama et al. [1] distinguished four technology generations: the mechanical generation (up to 1930), the electromechanical generation (1930-1985), the display generation (1985-1995) and the menu generation (after 1995). The idea behind such generations is that people who grow up with a particular style of tool control during what is called their formative period grow accustomed to that style and at a later age will have problems with a transition to other forms of control. Indeed, it was found [1] that adoption of a new style of control led to a step-wise increase in errors, whereas speed of control changed continuously with age.

Though the term ‘generations’ may suggest otherwise, usability problems are only indirectly related to aging. Freudenthal [2] found that teenagers and older subjects had similar problems in learning and using a new type of TV-VCR, the only difference being that older subjects were somewhat slower. Results on a more diversified group of users, consisting of kids (3-12), teenagers (13-17), college students (18-24) and adults (25-64), reported by Loranger and Nielsen [3], likewise show that usability problems occur at every age. Usability problems diminish with experience, though not always for all properties of the interface, and specific experience is what makes a technology generation. No one, therefore, is exempt from encountering usability problems in at least some interfaces.

A basic determinant of the advent of technology generations is the concept of ‘interface’. The term interface was defined as early as 1874 in the field of hydrodynamics as “a plane surface regarded as the common boundary of two bodies”. Analogously, the control layer between a computer and the human operator also came to be referred to as an ‘interface’. This control layer deviated in a fundamental way from earlier types of control, in that the relation between human action and form of actuation was no longer fixed. Otherwise stated, pressing a button can lead to different results depending on the context. The simplest example is the toggle switch, which can turn on a system but, activated a second time, switches it off. The type of control in which the rigid coupling between control elements and function vanished was typified by Docampo Rama and van der Kaaden [4] as the software generation. From the software generation, further developments led to yet other and diverse forms of interfaces, but all of them feature these so-called soft controls, which are fundamentally different from mechanical controls with their direct coupling to functions.
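
As a minimal illustration of a soft control (the article contains no code; the class and names below are purely hypothetical), the sketch models one physical key whose effect depends on the software context it is currently coupled to, and which toggles state like the toggle switch mentioned above:

```python
# Minimal sketch of a "soft control" (illustration only, not taken from the
# article): one physical button whose effect depends on the software context
# it is currently bound to, unlike a mechanical control wired to one function.

class SoftControlPanel:
    def __init__(self) -> None:
        self.context = "radio"                      # device the shared key currently addresses
        self.power = {"radio": False, "cd": False}  # on/off state per device

    def select_context(self, device: str) -> None:
        """Re-bind the same physical key to another device."""
        self.context = device

    def press_key(self) -> str:
        """Identical physical action, different result per context (toggle)."""
        self.power[self.context] = not self.power[self.context]
        state = "on" if self.power[self.context] else "off"
        return f"{self.context} switched {state}"


panel = SoftControlPanel()
print(panel.press_key())           # radio switched on
panel.select_context("cd")
print(panel.press_key())           # cd switched on: same key, different function
```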

From their inception, the software-generation interfaces have been associated with usability problems of a different kind than those of mechanical interfaces. The latter generally require more force to operate, and frequently also manual dexterity, but there is no uncertainty concerning the link between control and function. In modern interfaces, however, it is often not clear what the control elements are, or where they are, and even more often what the possible functions are, let alone what the relation between the two is.

A relatively recent example is the parking meter, which is quickly becoming the most widespread public system. A usability analysis of parking meters in nine different countries was performed by Pierson et al. [5], in which it appeared that no two parking meters were the same and that the complexity of meters varied considerably. Though the analysis was restricted to the physical interaction and the information value of the displays, and many usability errors were found, the usability situation is even more serious than reported. In a single country, and even in a single town, there may be many different parking meters that do not show any similarity. In general, this also holds for the payment methods, which may vary from a special card, to cash payment with coins only or with banknotes as well, to a debit card with or without entering a PIN code, or special prepaid coins. Some parking meters have no display illumination and are hard to decipher at night without street lights. The converse problem is that parking meters are sometimes exposed to direct sunlight, which can make the displayed information practically unreadable. The main problem, common to all parking meters, is that the action sequence in the user dialogue is mostly unclear, leaving the user unsure of what to do next.

Human Factors Engineering of Products

Traditionally, Human Factors Engineering dealt with improving the workplace: fitting the job to the worker. This could, and can, be done by improving or changing the equipment, the task, the environment and training [6]. In fact, the concentration on the workplace goes back as far as Taylor [7], who started by devising different shovels for different kinds of material handling at the Bethlehem Steel works and so achieved impressive increases in productivity. Hendrick [8] provides a comprehensive overview of the achievements of Human Factors Engineering at the end of the 20th century. Practically all examples treated belong to the category of production ergonomics: measures to increase worker productivity and to reduce workers’ injuries. Overall, the introduction of Human Factors Engineering results in a benefit-to-cost ratio of about 10 in very different production organizations. Many human factors principles were already gradually finding their way into legislation: in what was then the European Economic Community (later the European Community, now the European Union), directive 89/391/EEC (1989) on measures to improve safety and health at work was issued, and it has currently been ratified by 24 countries.

Human factors engineering entered the medical field more recently and there, in addition, also has to take patient safety into account. As an example, Copeland and Willing-Pichs [9] discuss the redesign of a thermal ablation device, where of the six major changes five relate to physical modifications and only one deals with GUI navigation. It is interesting to note that the original graphical user interface (GUI) “involved excessive steps to set up and use that required multiple component connections, continual visual monitoring and excessive mental and physical burden”.

This last example illustrates how advances in technology pushed human factors engineering from production ergonomics towards product ergonomics. The arrival of new types of product widened the circle of users dramatically, expanding from production workers to citizens and consumers. The traditional emphasis on production ergonomics is still visible in the ISO usability standard (1998), which defines usability as the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Many consumer products are not easily specifiable in terms of effectiveness and efficiency, while it is mostly what the product produces, rather than the product itself, that can be satisfying. In this vein Kahneman introduced the term “Hedonic Psychology” [10], which has been applied to user interfaces by Hassenzahl [11]. Hedonic interfaces are no longer work oriented, for which a cost-benefit ratio can be computed, but relate to ‘product appealingness’. It is, then, the function of the product which creates a ‘Quality of Experience’. Quality of experience is hard to quantify, and is also a far more subjective notion than e.g. efficiency, which makes it hard to predict the success of a product. A further development is that, with the advent of the web, many products, like computer games or a map navigation program, no longer exist as a physical device. In such a case there are basically four areas for design: visual design, auditory design, dialogue design and the user action repertoire, where the latter has been reduced to swiping and pressing buttons.

There is, however, one class of products that still has a physical form, like the aforementioned parking meters, ticket vending machines, ATMs and other public systems. The user action repertoire here is reduced to pressing buttons and, in many cases, manipulating one or two cards.

The uncertainty concerning actionable features and functions puts the burden on the user-product, or rather user-system, dialogue. It is exactly at this point that many products fail the most elementary usability criteria. Visual design may assist the user-system dialogue, but it often makes the dialogue less transparent, and so can be very confusing. ATMs are an example where usability has improved considerably since their first arrival, but the number of manufacturers of such systems is limited, and the scale of distribution is such that usability gradually increased as a result of customer complaints and efficiency measures.

As argued before, interactive interfaces appear on almost any product with more than elementary functionality, and so every citizen is obliged to deal with an ever-growing number of interactive interfaces that are intended to support the consumer but in actual practice often limit effectiveness, efficiency, satisfaction and quality of experience.

Basic Principles in Human Factors Engineering and Ergonomics

Two laws of design lie at the basis of all design guidelines and also refute commonplace assumptions that are held, implicitly or explicitly, in design-oriented environments.

• The designer is not the user.

• Users are more different than anyone thinks.

The first law concerns the view that the designer has of the intended user. Unavoidably, a designer knows him or herself best of all people. It is from this knowledge of the self that the designer has to make decisions as to which design elements are perceivable, understandable and actionable. In general, the degree to which the personal experience differs from that of other people is greatly underestimated.

This leads to the second law about individual differences.

Mostly, designers, just like other people, will have an idea about the range of capabilities that is representative of the population in general. But the human population also comprises many specialists who take care of a surprisingly wide range of human needs, such as dentists, optometrists, doctors, physical therapists, teachers of the blind, teachers of the deaf and rehabilitation teachers, each of whom knows a great deal more about a specific human faculty than the lay person. While these are all faculties in the physical field, differences in the cognitive field may be even more diverse and unpredictable. It is the experience of all experimental psychologists that among the participants in an experimental study there are always one or more people who produce a reaction that is totally unexpected and cannot be explained, despite all care taken in the experimental design, controls and instruction.

Especially with regard to public systems, like parking meters or ticket vending machines, it is highly unlikely that the designer knows, for example, the details important for low-vision persons that the eye specialist knows. To take into account foreign users who are not familiar with the local language, English is often used as a second language, on the presupposition that almost anyone in this world knows some English. However, a sentence like “Subsequent insertion of a coin is a contravention”, cited by Pierson et al. [5], will probably not even be understood by people with only a moderate knowledge of the language. The sentence is intended to state that inserting a coin in the machine after the parking time has expired is breaking the (British) law.

In both of these cases, information visibility and language comprehensibility, the designer practically always overestimates the capability of users.

Availability of human factors knowledge

In the past decades an impressive body of knowledge has been developed in the field of human factors engineering. Practically all universities with social science and/or engineering curricula have courses or degrees in human factors, ergonomics, human-system interaction and similar fields. Searching the web with the keyword ‘usability’ reveals a seemingly endless list of firms, consultancies, agencies, shops, institutes and advisory groups that specialize in usability and evaluation. There are numerous evaluation methods available, often automated to a large degree, that can uncover usability problems and provide information for improving the evaluated product. For the Web, the World Wide Web Consortium (W3C, 2017) states it as follows:

“It is essential that the Web be accessible in order to provide equal access and equal opportunity to people with diverse abilities. Indeed, the UN Convention on the Rights of Persons with Disabilities [12] recognizes access to information and communications technologies, including the Web, as a basic human right.

Accessibility supports social inclusion for people with disabilities as well as others, such as older people, people in rural areas, and people in developing countries.”

To this end W3C formed the Web Accessibility Initiative (WAI, 2017), which provides guidelines for Web Content Accessibility (WCAG). The standard (WCAG 2.0) has been recognized by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as ISO/IEC 40500:2012.

Before that, after a long gestation period, the ISO usability standard ISO 9241 was published in 1992. Since then many parts have been added, and together they provide the most comprehensive source of human factors guidelines for interactive systems. Central to the ISO usability standard are the three evaluation variables: effectiveness, efficiency and satisfaction. For medical devices there is the standard ISO/IEC 62366-1 [13], which adds ease of user learning to the evaluation variables.
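
As an illustration only (ISO 9241 itself does not prescribe these exact formulas; the data and operationalizations below are hypothetical), the three evaluation variables are often measured in a usability test roughly as follows: effectiveness as task completion rate, efficiency as time on successful tasks, and satisfaction as a mean questionnaire rating:

```python
# Hedged sketch of one common way to operationalize the ISO evaluation
# variables from usability-test data (invented numbers, not from the article).

from statistics import mean

# Hypothetical test sessions: (task completed?, time in seconds, satisfaction 1-7)
sessions = [
    (True, 95, 5), (False, 240, 2), (True, 130, 4),
    (True, 80, 6), (False, 310, 1),
]

effectiveness = sum(done for done, _, _ in sessions) / len(sessions)  # completion rate
efficiency = mean(t for done, t, _ in sessions if done)               # mean time on successful tasks
satisfaction = mean(s for _, _, s in sessions)                        # mean rating

print(f"effectiveness: {effectiveness:.0%}")
print(f"efficiency: {efficiency:.0f} s per successful task")
print(f"satisfaction: {satisfaction:.1f} / 7")
```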

When observing the use of one or more instances of the range of products mentioned above, it is clear that these can be quite ineffective, are certainly not efficient, are not easily learned and do not lead to user satisfaction.

The design

There are many different definitions of ‘design’, as well as descriptions of the design process. A more global description of design that seems applicable to interactive products is given by Cox as: “‘Design’ is what links creativity and innovation. It shapes ideas to become practical and attractive propositions for users or customers” [12].

The problem with this description, however, is that neither creativity nor innovation can be unambiguously specified or quantitatively applied in a design process. Both concepts are in fact emergent properties of a design; output rather than input.

Looking more closely at the development of an interactive product, it entails a problem-solving process [14] in which the goal is technically well specified, but the ways in which it can be attained are multiple. Ullman [15] describes design as an iterative decision process, moving the design idea in consecutive steps toward the end goal. But a decision would be trivial if it were not associated with uncertainty, and consequently incomplete knowledge and ambiguity may put the design process at risk [16]. This holds especially for multidisciplinary products, like interactive devices, in which electronics, mechatronics, security, content provision and human communication rules have to be combined. Considering the plethora of so-called intelligent devices that only function after some sort of user-system dialogue, this multidisciplinarity hardly ever leads to understandable products with a transparent interface and high usability.

Human Factors and the Design Process

If the knowledge and the expertise in human factors engineering are so widely spread and accessible, the most basic question is why such knowledge practically never permeates commercial products and public systems. There are a number of reasons for this state of affairs, which will be discussed next. These reasons are not always independent of each other, some are related, and that is why an exact number of reasons cannot easily be given. It should also be noted at the outset that scientists in the field of human communication very rarely participate in the actual design of commercial products, whereas the technical system designers are not active in the scientific discipline of human communication theory. Whereas currently most decision makers, industry leaders and politicians emphasize the necessity of product usability, the question really is whether usability is a concrete, clearly delineated concept that can be ported to any arbitrary product or system. Experience teaches us that this is not the case. In the words of Hendrick [8], human factors is not a common-sense issue. It is not something elementary that you can just add to a product, after which it all of a sudden becomes user-friendly.

Though perhaps understandable, there are few, if any, university and college software development curricula in which a serious course in Human Factors Engineering has been adopted. The prejudice that Human Factors Engineering is a matter of common sense, and that a course on it is not justifiable against the pressing need to acquire knowledge of new software developments, effectively prevents more than lip service being paid to it. So, while there have been many attempts to create design processes that involve the participation of human factors engineering in product design, most of which would probably be effective, they will in all likelihood not be followed in the design of those things that we dearly need, but do not understand.

The design project: time

For every new product a design team is assembled, headed by a project manager who practically always has a technical background, e.g. in mechanical, electrical or software engineering. The design can be seen as a project in which a number of milestones are set, usually dates at which one part has to be finished or a new activity starts. The project also has a fixed end date, after which a beta version of the developed product is intended to be released. Of all activities in a company, product development is one of the most expensive. None of the activities in the design project returns any money, while costly staff do not contribute to the regular company production. Quite often, therefore, students are employed on a temporary basis to reduce costs. It was, for example, students who had been programming the HP45 scientific calculator (HP Museum [17]) who revealed that by pressing a combination of keys a millisecond timer could be started that was not part of the specified built-in functions. The task of the project manager is to adhere strictly to the time schedule, as any delay will incur costs that have not been budgeted. This means that when critical information needed for a next step is not yet available, the next step is started anyway, on the basis of an estimate of what that information might be. Schedule adherence is also the reason why end-user input in the design is generally avoided: any ergonomic evaluation of early design solutions might enforce costly redesign with associated time loss. In actual practice, though, 74% of software development projects have time overruns [18].

Perhaps remarkably, user involvement in early design phases has been shown to reduce time overruns rather than increase them. Rauterberg et al. [19] found that with user participation in the design phase, time overruns were significantly reduced (p ≤ 0.02) relative to no user participation.

Intuitively, user participation with uncertain outcomes presents a risk for the schedule, which is exacerbated by the realization that most design projects suffer from time overruns. From decision theory [20] it is well known that people try to avoid risk, a tendency called risk aversion: people prefer to avoid a loss rather than obtain a gain of the same size and probability. In this case, then, risk aversion argues against user participation.
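
A small numerical sketch can make this concrete (my illustration; the loss-aversion coefficient of 2.25 is the value commonly cited from Kahneman and Tversky's work, and the payoff numbers are invented): a gamble with zero expected value is nevertheless rejected by a loss-averse decision maker, mirroring a project manager's reluctance to gamble on user participation.

```python
# Hedged illustration of risk/loss aversion, not from the article: losses are
# weighted more heavily than equally probable, equally large gains, so a
# gamble with zero expected value gets a negative subjective value.

LOSS_AVERSION = 2.25          # commonly cited coefficient (losses weigh ~2.25x)

def subjective_value(outcome: float) -> float:
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome

# 50/50 chance of gaining or losing, say, ten days of schedule (invented numbers).
gamble = [(0.5, +10.0), (0.5, -10.0)]

expected = sum(p * x for p, x in gamble)                   # 0.0
felt = sum(p * subjective_value(x) for p, x in gamble)     # -6.25

print(f"expected value: {expected:+.2f}, subjective value: {felt:+.2f}")
# The negative subjective value explains the preference for the "safe" option
# of skipping user participation, even when the objective expectations are equal.
```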

Another reason why time is deemed to be important is competition with other companies designing a similar product. Having a product on the market earlier can be expected to lead to a higher and earlier return on investment.

The design project: cost

The cost of a design project is related to its duration, but has different boundary conditions. Cost sets a hard limit on the investment in a design project, which may make the project infeasible for smaller companies. With respect to costs it has been established that 59% of design projects have a cost overrun [18]. The hard limit on cost expenditure can often be offset in the case of public systems, where institutional customers are charged more due to unforeseen circumstances. End-user participation is assumed to increase cost, which is why, in view of a competitive expected product price, it is often avoided. Again, Rauterberg et al. [19] found that this assumption is just as untrue as it is for time: in their investigation a cost overrun of 90% without user participation was reduced to below 30% (p ≤ 0.03) when end users were involved. Here, too, it can be argued that risk aversion [20] explains the tendency not to involve end users in the development phase. A more recent and similar analysis of the function of human factors in product development is given by Schmitt et al. [21].

The design project: cognitive models

Every designer has not only a view of the prospective product, but also a cognitive model of how it must be used, what messages will be given, and what kind of actions must be performed. As stated before, it is not easy to generalize from your own cognitive model to those of other people. In addition, designers are well aware of the technical details and specifications of the system to be developed, which is not the case for the great majority of the intended users. This is essentially the same situation as that underlying the concept of technology generations. Two examples may illustrate this.

A programmer in a large multinational company had made a check-in system for the employees of his department, which about half of his co-workers used routinely while the other half claimed to be unable to understand the system. Confronted during a meeting with the difficulty of use, and hearing some of the arguments, he stated that he could not understand what his colleagues did not understand. This gets to the bottom of why cognitive models cannot always match, or even be conjectured.

The second example derives from events and decisions concerning the design of a Hi-Fi audio set that combined a digital radio tuner, a CD player, a minicassette recorder and an amplifier. The unique selling feature of the product was that the audio set had to be controlled by a single set of keys that could be coupled to all four devices, dependent on the context. Since the controls were coupled to function by means of software, this device was thought to be the first audio product of the software generation [3].

The present author was a member of the design board, with two-weekly meetings overseeing the design decisions and the progress. The product manager insisted on installing a double cassette deck, with the argument that 85% of cassette deck sales were double-deck, and therefore more popular. The rest of the board declined, not only because the addition of a device would increase the number of couplings by five and so complicate the software, but also because it would increase the complexity of an interface targeted at an older segment of the population.

Even though it had been agreed that there would only be a single cassette deck, it was announced at the next meeting that there would be two decks, a decision that could not be changed in view of the time pressure.

As it turned out, the audio set was a commercial failure, and was singled out for the difficulty of controlling it. From the complaints it was apparent that the different functions of the same control element were a great source of confusion. It was reported to the interface designer that various users found themselves turning on a device they did not want to, e.g. the radio instead of the CD player. What happened was that at every press the software took some time for a context change, which took longer than actuating a device by mechanical means.

Thinking that the button press was not properly registered, users pressed the button another time, which again effected a context change, this time unintended.

The interface designer answered indignantly that nobody would press a button twice. Our interface evaluators, however, could show that most customers pressed buttons two or three times, with one participant pressing the same control as many as ten times. This is a clear case of a fundamental mismatch between cognitive models, for which there is no single clear solution. What is clear, though, is that early involvement of a representative group of end users could have prevented many of these difficulties. Yet, considering the diversity in the user population, it is difficult to encompass all individual exceptions. The prevalence of the designers’ cognitive model in system design, which is practically unavoidable in the current industrial setting, will still, perhaps unintentionally, result in ergonomically unsound products.
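
The failure mode can be made explicit in a small simulation (my reconstruction of the scenario described above, not the product's actual software; all timings and names are invented): each press immediately re-binds the shared keys to the next device, but visible feedback lags behind, so an impatient second press lands in the new context and selects an unintended device.

```python
# Hedged sketch of the double-press problem: the software context change is
# slower than a mechanical actuation, so a second press made before any
# feedback appears advances the context once more, unintentionally.

CONTEXT_ORDER = ["radio", "cd", "tape1", "tape2"]

class SharedKeyAudioSet:
    def __init__(self, switch_delay_ms: int = 800) -> None:
        self.switch_delay_ms = switch_delay_ms  # time before the change becomes visible
        self.index = 0                          # currently addressed device
        self.feedback_ready_at = 0              # moment the last change is shown

    def press_source_key(self, t_ms: int) -> str:
        saw_feedback = t_ms >= self.feedback_ready_at
        # The press is registered regardless of whether feedback was visible.
        self.index = (self.index + 1) % len(CONTEXT_ORDER)
        self.feedback_ready_at = t_ms + self.switch_delay_ms
        note = "" if saw_feedback else "  <- repeated press before any feedback"
        return f"t={t_ms:>4} ms: keys now address {CONTEXT_ORDER[self.index]}{note}"


audio = SharedKeyAudioSet()
print(audio.press_source_key(0))     # user wants the CD player -> cd
print(audio.press_source_key(300))   # no feedback yet, presses again -> tape1 (unintended)
```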

The design project: specification detail

Whenever a new public system is introduced for citizens, complaints about difficulty of use will occur, which is understandable because of unfamiliarity. When complaints show no sign of abatement, the customer agency will complain to the software firm and require redesign, or a form of compensation. In practically all cases the software company can rightly claim that the requirements for interface usability were not made specific (and that one of the founding principles of the company is to design user-friendly systems). Looking into the requirement specifications for public systems, it is often surprising to see how little attention, if any, is given to user-friendliness.

In the Netherlands a closed communication network for mobile communication between police, fire brigades, ambulance services and other assistive services, C2000, introduced around 2004, was designed without specific requirements for usability. In trying to replace close to one hundred older analogue networks for similar purposes, the urgency of introduction prevailed over other considerations. In actual practice it appears that the workload of the individual police officer has increased, while in cases of calamities the traffic load prevents assistive services from contacting each other.

Another public system for which no usability evaluation was ordered is the Public Transport Chipcard in the Netherlands. The card is valid for all public road transport and railways and is expected to ease fare payment. Tests with the system started in 2002, and countrywide installation was realized in 2012. Part of the usability difficulties stemmed from the unfamiliarity of the debit/credit system; improvements were made over ten years, though confusion is still not absent.

In conclusion it can be said that the absence of human factors engineering expertise in institutional agencies, like governments, utility firms, municipalities and NGOs, is an important cause of low-usability interfaces in public systems.

The design process: belief and conviction

In studying the user-friendliness of products and systems it is wise to study their use in daily practice, e.g. in the home of the user, rather than in a laboratory setting, where users normally behave differently than in their own home. A field that currently draws much attention is telecare, where people with dysfunctions or chronic diseases can to some degree help themselves and communicate with care providers. The field is economically important inasmuch as population aging requires more care providers, of whom there are increasingly fewer. In a usability study, diabetic patients were given a blood glucose meter and a scale to take the measurements, and had to send these by computer to the care centre [22]. To this end they received an introduction to the operation of the products and training in using them. When the patients were observed using the equipment, a number of difficulties could be seen, which were video recorded. There were problems with drawing blood for the glucose meter, problems with the scale, which showed negative weights, and computer interface troubles.

Before this at-home evaluation study, management of the care centre held the opinion that the equipment worked successfully and that nothing stood in the way of expanding the service. Apparently, this was a matter of belief; there was no evidence on the actual success of the telecare system. After viewing the video footage in detail, they changed their mind and questioned the ergonomic quality of the products. In fact, they complained to the manufacturer of the glucose meter with reference to the observed handling difficulties. The answer of the manufacturer was that there could be no question of any changes in design as the device was ergonomically optimized and fully evaluated. This is apparently a case of conviction, though one that can easily be contradicted. However, there is no way in which the care provider can change this situation. Purchasing another glucose meter is not an attractive solution as that complicates technical service and maintenance, usually not a strong point of a care centre. In addition all glucose meters have their own and often similar usage problems.

The situation sketched is by no means unique. In Great Britain the Whole System Demonstrator [23] was the telehealth system with the largest randomised controlled trial in the world. Though some positive results were obtained, the actual cost reduction amounted to only 8%, while there were naturally the common complaints about the ergonomic problems of the equipment.

The largest telecare system currently in operation is the Veterans Health Administration CCHT, the Care Coordination/Home Telehealth system [24]. Their overview clearly describes the positive aspects of the telehealth system, but does not go into detail about the practical issues. It is somewhat disconcerting to note that of those diabetic patients who were considered to be good candidates for the use of glucose-measuring equipment, only 25% ultimately used it, the remainder having difficulties using it for a range of reasons. Mahoney [25] gives a meta-analysis of a range of studies in the homes of elderly people, from which a low success rate of telehealth solutions emerges. From these observations one has to conclude that managerial layers will promote telehealth solutions, for which there are many economic and social arguments, but are relatively blind to the usage difficulties among the clients. In this way, beliefs and convictions are important impediments to the development of equipment that is easy to use for a broad variety of people with or without dysfunctions.

An interesting overview of these factors that are ‘human’ and therefore belong in this category is given by Burke et al. [26-30].

Conclusion

The message of this paper is a negative one, and, in addition, it offers little hope for improvement. From this analysis it appears that it is not the ergonomics and human factors community that is to blame, but the industrial and economic system in which products are developed, together with the mainstream opinions from which production strategies emerge.

For products for private use it could be suggested to introduce legal rules for usability. In fact, there are already many legal rules on consumer safety, e.g. with respect to CRT radiation levels or electrical safety. The advent of the smartphone seems to have led to higher usability than its complexity and functionality would suggest, and in that sense it is a definite step forward. Many other products mentioned in the introduction, however, have not nearly risen to that level.

For medical device development, the occurrence of medical errors, the potential risk for patients and the low usability of many medical products have already led to legislation, e.g. by the FDA [31]. The international regulatory community produced IEC 62366, Medical devices - Application of usability engineering to medical devices, for the approval process outside the US, in which harmonization with the FDA is aimed at.

For public systems it is clear that the purchasing agencies, especially governmental ones, have a responsibility to request universal usability for their systems. If agencies that introduce public systems placed more emphasis on high, or universal, usability, that would be an important step in serving the population.

References

  1. Docampo Rama M (2001) Technology generations handling complex user interfaces. Doctoral dissertation submitted to the University of Technology Eindhoven.
  2. Freudenthal A (1999) The design of home appliances for young and old consumers. Doctoral thesis submitted to the University of Technology Delft.
  3. Loranger H, Nielsen J (2013) Teenage Usability: Designing Teen-Targeted.
  4. Docampo Rama M, van der Kaaden F (2002) Characterisation of technology generations on the basis of user interfaces. In: Pieper R, Vaarama M, Fozard JL (eds.) Gerontechnology. Technology and aging-Starting into the third millennium. Shaker Verlag, Aachen, Germany pp: 35-53.
  5. Pierson C, Klompenhouwer M, Nieuwland J (2009) Parking Meters, need change? UX Alliance, Scribd.com.
  6. Wickens CD, Lee J, Liu Y, Gordon-Becker S (2013) Introduction to Human Factors Engineering. Upper Saddle River, NJ: Pearson Prentice Hall.
  7. Taylor FW (1911) The Principles of Scientific Management. New York: Harper and Bros.
  8. Hendrick HW (1996) Good Ergonomics is Good Economics. 1996 Presidential Address, Human Factors and Ergonomics Society (HFES).
  9. http://www.ximedica.info/images/pdfs/Ximedica-3-Keys-to-Succsfully-Integrate-Human-Factors-and-Usability-into-Medical-Device-Design.pdf
  10. Kahneman D, Diener E and Schwarz N (1991) Well-Being-The Foundations of Hedonic Psychology. New York: Russell Sage Foundation.
  11. Hassenzahl M (2001) The effect of perceived hedonic quality on product appealingness. International Journal of Human-Computer Interaction 13: 481-499.
  12. Cox G (2005) Cox Review of Creativity in Business: building on the UK’s strengths. British Design Council.
  13. ISO/IEC 62366-1 (2015) Medical devices-Part 1: Application of usability engineering to medical devices.
  14. Newell A, Simon HA (1972) Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
  15. Ullman DG (2001) Robust decision-making for engineering design. J Eng Design 12: 3-13.
  16. Stacey M, Eckert C (2003) Against ambiguity. Computer Supported Cooperative Work (CSCW).
  17. Standish Group (2004) CHAOS Report. West Yarmouth, Massachusetts: Standish Group.
  18. Rauterberg M, Strohm O, Kirsch C (1995) Benefits of user-oriented software development based on an iterative cyclic process model for simultaneous engineering. International J Indus Ergon 16: 391-410.
  19. Peterson M (2009) An Introduction to Decision Theory. Cambridge: Cambridge University Press.
  21. Schmitt R, Falk B, Stiller S, Heinrichs V (2015) Human Factors in Product Development and Design. In: Brecher C (ed.) Advances in Production Technology. Lecture Notes in Production Engineering pp: 201-211.
  22. Berentsen J, Meesters LMJ, Vergouwen RJM (2004) Addressing Usability Issues in Telemedicine Applications. Report Human-Technology Interaction group, Department of IE and IS, University of Technology Eindhoven, Netherlands.
  22. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_131689.pdf
  23. Darkins A, Ryan P, Kobb R, Foster L, Edmondson E, et al. (2008) Care Coordination/Home Telehealth: the systematic implementation of health informatics, home telehealth, and disease management to support the care of veteran patients with chronic conditions. Tel Med J E-health 14: 1118-1126
  24. Mahoney DF (2010) An Evidence-Based Adoption of Technology Model for Remote Monitoring of Elders’ Daily Activities. Ageing Intern 36: 66-81.
  25. Burke R, Kenney K, Kott K, Pflueger K (2001) Success or Failure: Human Factors in Implementing New Systems.
  26. European Economic Community  (EEC) (1989) Council Directive 89/391/EEC-measures to improve the safety and health of workers at work.
  27. ISO 9241-11 (1998) Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability.
  28. Walter M, Storch M, Wartzack S (2014) On uncertainties in simulations in engineering design: A statistical tolerance analysis application. Simulation 90: 547-559.
  29. Food and Drug Administration (FDA) (2011) Applying Human factors and Usability Engineering to Optimize Medical Device Design.
Citation: Bouwhuis DG (2017) k Reasons Why Ergonomics Cannot Make Interactive Devices to Be User-Friendly (k≥3). J Ergonomics 7:195.

Copyright: © 2017 Bouwhuis DG. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.