Thursday, May 1, 2014

Do Curricula Correspond to Managerial Expectations? Core Competencies for Technical Communicators

Rainey, Kenneth T., Roy K. Turner, and David Dayton. "Do Curricula Correspond to Managerial Expectations? Core Competencies for Technical Communicators." Technical Communication 52.3 (2005): 323-52. Print.

Research question: The purpose of this study was to identify the core competencies of technical communicators sought by technical communication managers. Furthermore, the data collected are intended to provide direction for undergraduate technical communication curricula.

Objects of Study: The primary (ideal) objects of study are current technical communication managers responsible for hiring technical communicators for their teams and/or organizations. The secondary (actual) objects of study were technical communication managers; however, the sampling used for the study posed limitations: “We did not have access to a large group of technical communication managers from which we could draw a random sample, so we opted to obtain a convenience sample of those managers willing to respond to our invitation to take the survey” (325). Also, the 10 largest (by enrollment) undergraduate technical communication programs were examined in this study.

Using a convenience sample introduces sampling bias. Additionally, the survey yielded a low response rate: 67 responses out of 587 invitations sent to listserv contacts, or roughly 11%. Therefore, the results of this study are not generalizable.
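As a quick check of the arithmetic behind that criticism, the response rate can be computed directly; the minimal sketch below uses only the two figures reported above.

```python
# Response rate for the survey, using the figures reported in the precis:
# 67 responses from 587 invitations sent to listserv contacts.
invitations = 587
responses = 67

response_rate = responses / invitations
print(f"Response rate: {response_rate:.1%}")  # Response rate: 11.4%

# Nonrespondents (the other ~88.6%) could differ systematically from
# respondents, which is one reason the results cannot be generalized.
```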

Methodology: Several methods of data collection were used in this study. A qualitative content analysis examined 156 course descriptions from the 10 undergraduate programs chosen. Eight non-industry-specific core competency categories were derived from 17 focus groups conducted by STC in 1996. A convenience sample of technical communication managers responded to a survey. Prior to rating competencies on the survey (which used a Likert scale), participants were asked to provide an “unbiased reflection” on the skills they seek in job candidates. Three in-person interviews were also conducted during the 2004 STC Annual Conference. The multiple data collection methods make it evident that triangulation is present in this study.

Analysis: The survey ratings and open-ended responses were grouped into ‘Personal Skills,’ ‘Personal Qualities,’ and ‘Technical Skills.’ Specific skills emerged in each category based on the number of times they were mentioned in the responses. Examples of these skills can be seen in Table 3 on page 327. Means were calculated from the Likert-scale responses. Mean rankings of 2.9 or above (on the 4-point scale) were considered the most important competencies and include collaborative, writing, technical, and self-activation/evaluation competencies. A more thorough outline of the desired competencies (by rank) can be found in Table 4 on page 328. Responses from the three in-depth interviews contributed findings similar to the survey results.
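The cutoff logic described above can be sketched in a few lines. In this minimal illustration, only the 4-point scale and the 2.9 threshold come from the article; every competency score is an invented placeholder, not data from the study.

```python
# Sketch of the article's cutoff logic: mean Likert ratings on the 4-point
# scale, flagged against the 2.9 threshold. All scores are invented.
ratings = {
    "collaboration": [3, 4, 3, 4, 3],
    "writing": [4, 4, 3, 4, 4],
    "technical skills": [3, 3, 3, 4, 2],
    "self-activation/evaluation": [3, 3, 4, 3, 2],
    "tool trivia (hypothetical)": [2, 3, 2, 3, 2],
}

CUTOFF = 2.9  # mean rankings at or above this were deemed most important

for competency, scores in ratings.items():
    mean = sum(scores) / len(scores)
    status = "most important" if mean >= CUTOFF else "below cutoff"
    print(f"{competency:30s} mean={mean:.2f} ({status})")
```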

As mentioned before, one problem with this study’s methodology is the convenience sample used to gather survey responses and conduct interviews.  The sample is not representative of all technical communication managers. 

Theories: Previous research conducted by Whiteside (2003) and Barbara Giammona contributed to this study. Whiteside’s surveys indicated that technical communicators have difficulty “understanding their role within an organization and understanding day-to-day business operations” (332). Furthermore, as part of her master’s program, Giammona interviewed 25 technical communication leaders and found that core competencies for technical communicators center more on real-world personal skill sets than on any one particular set of tools (332). Research by Allen and Benninghoff (2004), drawing on 42 program survey responses, supports the importance of audience analysis, rhetorical analysis, collaboration, document design, and related competencies. The findings from this study align with previous research.


Findings & Generalizations: Results from this research study identified the four most important competencies sought by technical communication managers: collaboration, writing, technical skills, and self-activation/evaluation. These competencies, as well as current trends in the profession such as telecommuting, outsourcing, and emerging technologies, should be the guiding factors in revising technical communication programs. The researchers cite Whiteside’s suggestion to have regional and national panels made up of undergraduate directors review this data and make recommendations for curricular changes. While this research study uncovered significant findings, the sampling used poses a threat to its external validity.

Responses of American Readers to Visual Aspects of a Mid-Sized Japanese Company’s Annual Report: A Case Study

Bethany Haberstroh

This article by Maitra and Goswami outlines a case study examining American readers’ responses to a translated corporate annual report for a Japanese company. The document was translated from its original Japanese version so that American readers could assess its visual design. Protocol analysis was used to collect reader responses. Flower and Hayes define a protocol as a “description of activities, ordered in time, in which a subject engages while performing a task” (qtd. in Lauer and Asher 26).

For the purposes of this case study, Maitra and Goswami use Schriver’s definition of document design as a “highly constructive activity in which building an adequate representation of a communication problem demands careful analysis of the unique features of the given rhetorical situation” (qtd. in Maitra and Goswami 198). Text refers to the written or verbal elements, while document refers to both the written and visual components of a work. A literature review provided a framework for this case study; in particular, it supplied elements of both American and Japanese document design.

Research questions:
1. How would American readers/document designers respond to a translated document that reflects the assumptions and preferences of another culture?
2. Do the responses of a purposively selected sample of American readers and reviewers represent the cultural sensitivity of the American document design process models?

Methodology:
Four sets of two readers each were selected to read the translated document. Three of the four sets of readers were familiar with information design, while the fourth set represented potential users. A pilot test was conducted using one of the three sets of readers in order to establish a framework for the actual study. Participants were asked to rank their knowledge of Japanese culture as “a) good, b) working, or c) poor.” All participants ranked their knowledge as “poor”; however, a lack of consistency exists between their levels of “poor” knowledge, which could pose a problem for reliability, although this is not mentioned in the article. For example, four of the readers noted that they had no previous knowledge of Japanese culture, while three were aware of differences between American and Japanese writing. The last reader had previous language knowledge but had not used it in several years. In order to identify any negative connotations, disconnects between visuals and intended meaning, or misplacement of visuals in the translated document, the readers were asked to revise the translated document for an American audience.

Data collection:
The following types of protocols were used for this research analysis:
- Reader protocols prompting readers to think aloud and express their thoughts during the exercise
- Co-discovery protocols to overcome any individual reader’s difficulty in vocalizing his/her responses
- Active intervention protocols when certain sets of readers skipped over several pages or were not familiar with the process
Four categories of responses resulted from the pilot test and protocols: comments on (1) text, (2) quality and placement of visuals, (3) text-visual integration, and (4) page layout and cover design.

Results/Conclusion:
Results from this case study indicated the following:
- Readers did not consider aesthetics to be the most significant goal for document designers when they used visual elements
- Ambiguity in visuals and in text-visual integration actively interfered with readers’ comprehension
- Response to the visuals was shaped by the readers’ discourse communities

Essentially, the American readers found it difficult to make connections between the visuals and text due to the ambiguity and lack of captions or callouts.  American readers “assumed that all visuals were there for a reason, namely, to convey or clarify an information” (200).  Furthermore, “readers were frustrated mainly because the visual was not integrated with the adjacent text, which is what American document designer normally does” (201). 


This case study provided a basis for future research on how and to what extent American document designers need to adhere to culture-specific document design and discourse communities. Due to the nature of case studies, these results cannot be generalized to a larger population.

Wednesday, April 30, 2014

Who “Owns” Electronic Texts? (Howard)

Howard, T. “Who ‘Owns’ Electronic Texts?”

Background:

Historically, before the Digital Millennium Copyright Act of 1998, the worst penalty for violating copyright law was being sued. After the act was put in place, however, the repercussions of copyright infringement became much more severe: infringers may face statutory damages in civil court as well as criminal penalties of up to $500,000 in fines or up to five years in prison, and that is just for the first offense.

Additionally, under the Copyright Term Extension Act of 1998, copyright lasts for 70 years after the death of the author and, in the case of “works for hire,” 95 years from publication or 120 years from creation, whichever expires first.

Many writers in the academic world hold the notion that they own their writing and would like to believe that they control how it is used. However, with trends moving toward collaborative group work, hypertexts, and multimedia presentations, the ideal that writers control access to their work is facing new challenges. As a result, workplaces are finding themselves unprepared to deal with these types of issues.

Scenario 1
Deciding whether or not you need to ask permission to use a famous photograph from a magazine, which will be tweaked to go on the cover of your company’s annual report.
Scenario 2
Deciding whether or not to install software on your computer when your company has access, but your company didn’t necessarily pay for you to have that access.
Scenario 3
Deciding whether or not to quote an unpublished reference from a group research exchange email.
Scenario 4
Email privacy at work between you and a co-worker of the opposite sex when you are aware that email conversations are being monitored and talked about amongst the IT department.
Scenario 5
Deciding whether or not it is appropriate for a professor to publish a hypertext that helps his or her students to find jobs.


Historical Overview
The invention of the printing press transformed the book trade: books became cheap, easy to produce, and accessible. This ease of production and increase in competition created the incentive to protect a publisher’s copyright. Importantly, copyright law does not give authors and publishers the legal right to prevent the public from making “fair use” of texts.

Copyrights In The Electronic Environment
It is important to understand the general principles, but not all principles are clear. Here is a breakdown of the above scenarios.

            Scenario 1
The photo is a reproduction of original work. Therefore, consent should be sought for use. When all is said and done, the document designer should obtain a copy of the original photo from the copyright holder.
            Scenario 2
This answer depends on licensing agreements, but in most cases companies have a specific number of licenses per agreement and if a download of software goes over that number, it is considered copyright infringement.
            Scenario 3
Currently, it would probably be legal to quote a short passage from the email message, although the ethics behind this type of practice is severely clouded.
            Scenario 4
It is not likely that this can be appealed through copyright law because it is not based on “natural unlimited property right.”
            Scenario 5
University resources were used to develop the HyperCard stack. Therefore, the professor can publish it, although he or she should be prepared to share any profits derived from it.

Writing In An Emerging Organization: An Ethnographic Study (Doheny-Farina)

Doheny-Farina, S. “Writing In An Emerging Organization: An Ethnographic Study.” Written Communication (1986): 158-185. PDF.

Two research questions:
1. How do social and organizational contexts influence writing?
2. How does writing influence those organizations?

Ideally, the primary object of study would be a larger and more varied collection of organizations. In actuality, the secondary object of study is Microware, Inc., a business one year past its conception with 25 employees. Because this single organization cannot represent all organizations, the study is not generalizable.

Theoretical Assumptions
-          Rhetorical discourse is situated in time and place
-          The rhetor conceives of these situational factors through interaction with people, events and objects
-          The researcher attempts to explore human interaction as it is evident in social and cultural settings
-          A microscopic investigation of important parts of a culture can elicit an understanding of the culture
-          Individuals act on the basis of the meanings that they attribute
-          Researchers seek diverse interpretations because any act can have multiple meanings
-          Researcher is the primary research instrument and must play a dual role

The Setting
-          Microware, Inc. is a company that is one year old with approximately 25 full time employees. The company was built to help spawn new high-technology companies

Data Collection
-          Visits to the company by the researcher occurred 3-5 times per week for approximately 8 months, with each visit lasting from 1-8 hours
-          Most data was collected during formal and informal staff meetings in offices, hallways, and open areas in two different buildings
-          The key informants were the top five executives, two middle managers, and two outside consultants
-          Data collected in four ways
o   Field notes – observational, theoretical, methodological
o   Tape-recorded meetings
o   Open-ended interviews
o   Discourse-based interviews
§  one version modeled after Odell, Goswami, and Herrington
§  second version was adapted (collected drafts of writing from first to final)

Data Analysis
-          Reviewed data chronologically, then established analytical categories and properties of the categories
-          In general, the analysis describes the writing of an important company document, Microware’s 1983 Business Plan
-          This analysis explains the relationship between the writing of a Business Plan and the organizational context within which it was written.

The theory underlying this study is that our perceptions of writing, whether of problems or of crises, are dominated by our knowledge of academic writing, and that this ideal limits the intellectual and social significance of writing.


One section containing generalizations specifically addresses the typicality of the results. For example, the article states that the themes that affected rhetorical choices also influenced other aspects of the company’s operation. This is a generalization because it attempts to define how all people in the department relate one thing to another.

“How to Use Five Letterforms to Gauge a Typeface’s Personality: A Research-Driven Method” by Jo Mackiewicz

Introduction
In this article, Mackiewicz discusses the various ways that readers interpret the “personalities” evoked by various typefaces. She discusses the importance of typeface selection for technical writing, as using a typeface that does not match the mood one is trying to convey in a piece of writing can be detrimental to its interpretation by the audience. Mackiewicz points out that although many knowledgeable technical communicators and typeface designers acknowledge the personalities that typefaces possess, the literature lacks an empirical, research-driven approach to gauging which typefaces evoke certain qualities. She attempts to fill this gap with her study, which incorporates surveys and other qualitative research methods.

Research Questions
Mackiewicz’s study attempts to answer the following research questions:
1. What personality attributes do various typefaces convey, according to study participants’ assessments?
2. Do typefaces assessed similarly for a particular attribute have any anatomical features (i.e. physical characteristics) in common?

Analyzing Anatomical Features
Mackiewicz set up a survey that asked subjects to rate 15 different typefaces for their “professionalism” and “friendliness” on a Likert scale. Her subjects included 62 undergraduate students, some freshmen and some upperclassmen. She selected the letters “Jagen” because they include particular anatomical features such as single- versus double-story letters (a, g), obvious serifs or the lack thereof (n), and a letter that extends below the baseline (J). She selected typefaces that range from easily recognizable to fairly uncommon in a variety of styles. After collecting the responses from the participants, she triangulated the data by comparing what participants said about the fonts with what typographers and technical writers have said. She found that certain anatomical features could be related to ratings of “friendly,” such as imperfect letters (see Bradley Hand), simplistic lines (Comic Sans), and roundness of letters (both of the above fonts). In contrast, fonts rated highest for “professionalism” feature balanced terminals, moderate thick-to-thin transitions, moderate weight, and moderate proportions.
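As a minimal sketch of how ratings like these might be aggregated before comparison: the two typeface names below appear in the article, but the individual scores and the 5-point scale width are invented assumptions, not Mackiewicz’s data.

```python
from collections import defaultdict

# Hypothetical responses as (typeface, attribute, score) tuples.
# Typeface names come from the article; every score is invented.
responses = [
    ("Comic Sans", "friendliness", 5), ("Comic Sans", "friendliness", 4),
    ("Comic Sans", "professionalism", 2), ("Comic Sans", "professionalism", 1),
    ("Bradley Hand", "friendliness", 4), ("Bradley Hand", "friendliness", 5),
    ("Bradley Hand", "professionalism", 2), ("Bradley Hand", "professionalism", 2),
]

scores_by_cell = defaultdict(list)
for typeface, attribute, score in responses:
    scores_by_cell[(typeface, attribute)].append(score)

# Mean rating per typeface/attribute pair, the unit compared in the study.
for (typeface, attribute), scores in sorted(scores_by_cell.items()):
    print(f"{typeface:14s} {attribute:16s} mean={sum(scores) / len(scores):.2f}")
```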

Results

Mackiewicz acknowledges some limitations of her study but claims that overall it offers technical writers a research-driven method for assessing which fonts are appropriate for certain documents. Based on her results, she encourages writers to explore more interesting and out-of-the-box fonts with confidence. Her study extends the personal-preference or intuition method often used in font selection and gives technical communicators a methodology for choosing typefaces to fit the tone of their documents.

“Sentence Combining at the College Level: An Experimental Study” by Max Morenburg, Donald Daiker, and Andrew Kerek

This research study attempts to discover whether students who are instructed in the practice of sentence combining (referred to as SC) have more developed writing skills than students trained in traditional composition practices. Although earlier studies discussed the use of SC in elementary and junior high composition classes, this study is the first of its kind to analyze its place in college-level courses.

Hypotheses
It was hypothesized that an experimental group, trained in SC, would score significantly higher than a reference group on:
1. syntactic maturity factors as measured by standard quantitative criteria
2. overall writing quality as judged by a panel of experienced teachers of college composition
3. reading ability as measured by a standard reading test

Design and Procedures
The study used a pretest-posttest design. The researchers strictly controlled factors such as subject selection, teachers, assignment variables, and environment. They selected twelve sections of Miami University’s freshman English course for the study, comprising 290 students. Six sections served as the control group and six as the experimental group. Students were drawn from the lower 80% of the freshman class and randomly assigned to the 12 sections of 26 students each. The teachers were selected carefully, with six being faculty members and six being graduate assistants. The assignments for the study included eight compositions written at set points throughout the semester. The first and last compositions were especially important, as they served as the pretest and posttest. The researchers attempted to use two comparable topics for the pretest and posttest compositions (246). The control sections followed traditional teaching methods at the university, while the experimental sections made SC activities the exclusive content of the course.
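The random-assignment step could be sketched as follows. The 12-section layout and the 6/6 control-experimental split come from the summary above; the student names and the clean 12 x 26 roster are illustrative assumptions, not the study’s actual enrollment.

```python
import random

# Toy sketch of random assignment: shuffle the pool, deal students into
# 12 sections of 26, then split the sections evenly into control and
# experimental groups.
NUM_SECTIONS, SECTION_SIZE = 12, 26
students = [f"student_{i:03d}" for i in range(NUM_SECTIONS * SECTION_SIZE)]
random.shuffle(students)

sections = [students[i * SECTION_SIZE:(i + 1) * SECTION_SIZE]
            for i in range(NUM_SECTIONS)]
control, experimental = sections[:6], sections[6:]
print(f"{len(control)} control sections, {len(experimental)} experimental sections")
```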

Measurement
The pretest and posttest compositions from both the control and experimental groups were measured for syntactic maturity and writing quality by three different rating systems, including holistic, analytic, and forced choice. The raters were 28 teachers of college composition with varying levels of experience and education. The rating criteria used included ideas, supporting details, organization and coherence, voice, sentence structure, and diction and usage.

Results

The results of the study showed that college freshmen trained in SC scored significantly higher than control students on the factors of syntactic maturity and writing quality, so the first two hypotheses were accepted. The third hypothesis, regarding reading ability, was rejected. Although SC cannot transform students’ writing overnight, the study indicates that its use in the classroom is superior to traditional methods.

Tuesday, April 29, 2014

Katz- The Ethic of Expediency

Carlisle Sargent | 3/26/2014

Title: The Ethic of Expediency: Classical Rhetoric, Technology, and the Holocaust
Author: Steven B. Katz

Overview: Katz’s article begins with a rhetorical analysis of a memo written by a Nazi official named Just during the Holocaust. The document, which acts as a jumping-off point for Katz’s argument, describes the need for certain structural and technical changes to be implemented for vehicles used for mobile extermination practices. Once Katz analyzes the document and identifies the obvious problems with its overall message (as well as its utter success as a technical document), he explores his concept of “the ethic of expediency”—including its meanings and consequences for modern Western society.

“In most deliberative rhetoric, the focus is on expediency, on technical criteria as a means to an end” (257). Katz argues that within Just’s document, as well as most technical documents, the aim is to attain a technical goal at all costs. This creates an ethos of “objectivity, logic, and narrow focus,” and this ethos is exactly what writers and professionals adopt from the organizations they represent. In the case of the document Katz presents, the writer simply adopted the ethos of the Nazi party to complete a job he was assigned. Katz argues that the ethic of expediency was used throughout the Holocaust as an “adequate moral basis for making decisions,” and that the same basis is used today.

“Ethos...is an essential link between deliberation and action” (259). Katz argues that technical writing almost always leads to an action of some kind. Indeed, while the Aristotelian concept of logos acts as “the consideration of the means necessary to act”, pathos and ethos are the motivation to act. According to Katz, epistemology leads to ethics, which easily could lead to an ethic of expediency.

This tension between rhetoric and ethics is evident in Aristotle’s Rhetoric, which Katz goes on to say “gives us a practical ethic for technical writing and deliberative discourse, an ethic based almost entirely on expediency” (261). From here, Katz discusses Aristotle’s philosophy on the ultimate goals of rhetoric in more detail, which are somewhat morphed when looking at our Western culture’s shift from a “polis”-centered existence to a more individualistic society. Katz argues that this evolution of Western culture and emphasis on deliberative discourse, capitalism, and individualism are all reasons that led (in part) to the Holocaust.

“Hitler understood—all too well—that his political program for world war and mass extermination would not be accepted without a moral foundation” (263). Katz writes that in Hitler’s speeches, conversations, and writings, the ethic of expediency was directly employed. Hitler knew that he would not be able to use violence until he convinced his people of the moral purpose of “a means to an end.” Expediency became the basis of “virtue” by two means: politics and technology. In political terms, Hitler argued for the practicality of overtaking Europe and allowing the Aryan race to rise to its full potential, which involved removing the impediment of lesser races. In technological terms, progress (scientific findings, new technologies) became its own reason to act. Katz argues that Hitler believed (and convinced the German people) that if an action is technologically correct (the Just memo), then it is morally right (the mass extermination of innocent people). Katz goes on to argue that “technology is the embodiment of pure expediency” (266).


“We must always look at rhetoric in the context of historical, political, social, and economic conditions which govern the nature and use of rhetoric in culture” (269). Katz argues that expediency cannot be given free rein and, indeed, that modern society is still very much affected by the problematic view that technology (and therefore expediency) is infallible.

Usability and Format Design—Rubens and Rubens

The overall purpose of the two true-experiment usability tests conducted by Rubens and Rubens (R&R) was to identify the differences in format and design that can affect a technical document’s ease of use in terms of retrieval time and task completion (213).

Study 1: Manual Design and Performance
The purpose of Study 1 was to test three versions of the same manual to determine if one was easier to use. If this could be identified, then R&R would proceed with designing additional tests assessing features of the most usable manual (219). R&R pretested their research instruments—the original and modified manuals and the task-based questions—prior to their studies; however, very little information was given about the pretest itself (213). Two people participated, but neither person was described. Despite the lack of details about the pretest, R&R claimed it revealed strengths and weaknesses about the manuals.

Neither their hypothesis nor their research question(s) were stated explicitly; however, it can be inferred from their literature review and research design that they assumed changing the format and design of the original manual would make it a more usable document in terms of time-to-productivity and task support (215-19). This is a cause-and-effect relationship.

The original and modified manual formats were described in detail (214-17). The type of manual and its content were not identified, but withholding this information can be justified since the object of study is formatting, not content. Subjects were selected from unspecified classes at an unnamed college (220). In Study 1, 87 subjects participated, meeting the 10:1 subject-to-variable ratio. There were four variables in total. The two dependent variables were search and retrieval time (interval) and performance scores (interval). The two independent variables were manual type (nominal) and question type (nominal). An equal number of manuals was randomly distributed to the subjects for data collection. Those who received Manual B, the original, were the control group; those with Manuals A and C were the treatment groups.

Data was collected using a 10-question task-based test. Questions varied in difficulty from simple to complex as defined by R&R. As each subject took the test with his or her assigned manual, the time it took to complete a task was measured and a score was given for each correct answer. R&R did a correlational analysis of their interval and nominal data by comparing manual types to performance scores, manual types to question types, and manual types to time (221-23). Unfortunately, R&R did not provide variance or standard deviation. Only means were provided for all variables. Statistical significance was only calculated for question type to time and performance score, but the questions were not the focus of the study.
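To make concrete what was missing from R&R’s report, the sketch below computes variance and standard deviation alongside the mean. The retrieval times are invented placeholders, not data from the study.

```python
import statistics

# Invented retrieval times (seconds) for one manual type, illustrating
# the spread statistics R&R omitted from their report.
retrieval_times = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8]

mean = statistics.mean(retrieval_times)
variance = statistics.variance(retrieval_times)  # sample variance
std_dev = statistics.stdev(retrieval_times)      # sample standard deviation

print(f"mean={mean:.1f}s variance={variance:.1f} sd={std_dev:.1f}s")
# Reporting only the mean hides the spread that variance and SD capture.
```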

R&R’s generalization from their analysis: formatting techniques do not always create usable manuals, but they can influence performance.

Study 2: Reference Strategies and Performance
The purpose of Study 2 was to determine how reference aids affect retrieval time and which type contributes to ease of use. R&R hypothesized that the formatting of the reference aid and manual would be responsible for quicker retrieval times, though again this is not explicitly stated (224).

The original manual and Manual A from Study 1 were the focus because their performance scores were higher. The same task-based questions were reused. Three levels of reference aids with two variations were the major addition to the second study; however, it was not indicated whether these instruments were pretested (226). Subjects were again selected from unspecified classes at an unnamed college, but the selection process was not specified (227). This time, 104 subjects participated, whose demographic makeup was slightly different from Study 1’s but still met the 10:1 subject-to-variable ratio. The same dependent and independent variables were used from Study 1 with the addition of one more independent variable: reference strategy (nominal). Manuals and reference aids were distributed randomly.

Again, data was collected using the same 10-question task-based test to measure time and score (227). Again, data was correlated between the dependent and independent variables, and again, only the means and select statistical significance for question type to time and performance score were provided.

R&R’s generalization from their analysis: a variety of simple and task-oriented retrieval aids is most effective for manual design to improve performance.

Problems with the Research Design
  1. It is unclear whether subject selection was randomized. Randomization is integral to a true experiment’s internal validity.
  2. Hypotheses are implied, leaving R&R’s expectations of the results of treatments up to interpretation.
  3. Report of the data analysis seems selective. Variance and standard deviation were absent. Significance seemed irrelevant.
  4. It is unclear whether Study 2 had a control group. 

Rubens, Philip, and Brenda Knowles Rubens. “Usability Testing and Format Design.” Effective Documentation: What We Have Learned From Research. Ed. Stephen Doheny-Farina. Cambridge: MIT Press, 1988. 213-233.

Revising Functional Documents: The Scenario Principle—Flower, Hayes, and Swarts

Flower, Hayes, and Swarts attempt to answer two research questions:
  1. What would a reader-based revision of a Federal regulation look like? What do readers need?
  2. What kinds of revisions do expert writers make when they revise a Federal regulation? How do they meet the readers’ needs? (42)
The authors conduct two qualitative, descriptive case studies analyzing reading and revising strategies applied to a piece of Federal regulation, a functional document that people read “not merely to learn information, but in order to do something” (41). Both studies involve the scenario principle, which states that functional documents “should be structured around a human agent performing actions in a particularized situation” (42, 54).

Study 1: Analyzing the Needs of Readers
The primary object of study is the reader and what is needed in order to read a Federal regulation governing the Small Business and Capital Ownership Development Program. Protocol analysis is used to observe the objects of study, a method that collects tape-recorded transcripts of subjects reading aloud and paraphrasing meaning as they read. By using protocol analysis, the researchers are able to observe instances in the regulation where readers must pause and revise what is read in order to understand it. 
Hypothesis: “[I]f there were consistent patterns to our readers’ ‘revisions,’ these might suggest what sort of revisions the writers of these regulations should be making” (42).
Subject Selection: A representative sample of three small business people who probably could not afford legal interpretation of the regulation.
Coding System: Readers’ revisionary statements were coded into metastatements (unrelated comments) and content-related statements (comments interpreting the meaning of the regulation OR comments translating the regulation into an understandable form). Statements were further classified as structural, retrieval, and/or scenario statements. Clauses were the unit of measurement (simpler than T-Unit Analysis).
Scenario statements were the most frequent reader revision: “in trying to understand the text they frequently recoded it in order to form a concrete story or event by creating a condition/action sequence or by supplying agents and action” (45).
One questionable aspect of the method: It is not explicitly clear who or how many did the coding.
Result: The frequency of readers’ scenario statement revisions suggests the Federal regulation needs to be restructured around the readers’ search for answers in order to be functional.

Study 2: Analyzing the Nature of Writers’ Revisions
The primary objects of study are clauses and headings containing human focus in old and revised regulations.
Hypothesis: “[P]ublished revisions made by expert regulation writers reflect the heavy use of scenarios” (49).
Subject Selection: The regulation from Study 1 represents the old regulation, which the researchers knew was difficult to read. The Health Education Assistance Loan regulation represents the revised regulation, which is praised as easy to read.
Another questionable aspect of the method: Are these representative samples? What is difficult and easy?
Data Analysis: For the analysis of clauses, comparable segments were selected and counted for clauses containing human-centered discussions. For analysis of headers, four readers were instructed to identify old, concept-centered, definition headers and revised, human-centered scenario headers.
Yet another questionable aspect: No mention is made of who the four header readers are or how they were selected.
Result: “[E]xpert government writers and revisers seem to provide that human focus throughout their prose, not only in their sentences but even in their headings” (52).

Practical Revision Strategies
The researchers name the scenario principle a practical principle that is flexible in application to make reader-based, human-centered revisions to all levels of a functional document. Three levels of application include:
Top level: Organize information around 1. actions people take rather than definitions, 2. answering the reader’s questions, and 3. the reader’s need for specific information (54).
Local level: Use 1. examples and cues, 2. concrete situations and subsequent actions people take, and 3. operational definitions (55).
Grammatical level: Write sentences with 1) agents and actions and 2) human agents (56).
They are wary of generalizing the application of the scenario principle to all functional documents, mentioning that the notion needs further “detailed linguistic analysis” and is only a “working hypothesis” (54, 56).

Conclusion

Flower, Hayes, and Swarts conclude the article with a hypothesis for further research based on their results: “writers and revisers must find ways to create a reader-based structure of information in a text designed around its function, and around the comprehension strategies readers bring to it” (57).

Flower, Linda, John R. Hayes, and Heidi Swarts. “Revising Functional Documents: The Scenario Principle.” New Essays in Technical and Scientific Communication: Research, Theory, Practice. Farmingdale, NY: Baywood, 1983. 41-58.

Saturday, April 26, 2014

"From Design to Use: The Roles of Communication Specialists on Product Design Teams," by Steve Doheny-Farina

Steve Doheny-Farina’s “From Design to Use: The Roles of Communication Specialists on Product Design Teams” deals with the roles technical writers and communication specialists occupy on most product design teams. Doheny-Farina begins the article with several examples of sleek and shiny technology with all the capabilities and functionality in the world that are nonetheless virtually unusable and, thus, not worth the steep learning-curve investment for their users. Doheny-Farina traces this problem to the failure to involve users at every step of the design process, and says it is very easy for developers, product engineers, and programmers to get stuck in their own heads and design technical and information systems which appeal only to like-minded people, rather than to laypeople and those likeliest to actually use the products. He introduces excerpts from Don Norman’s seminal work The Design of Everyday Things, which argues that the burden of learning how to navigate and use complex, multi-faceted software should not be placed on users, but instead on the developers and designers who create the product. This fundamental gap in knowledge and understanding lies in the developers and designers simply not knowing the users well enough, a problem which Doheny-Farina advises would be mitigated greatly by better and more active integration of technical writers and communication specialists directly into the product development cycle. Where traditionally technical writers have been treated with little regard, their work seen as less vital and more supplemental to product design than that of the engineers and developers who actually design and manufacture the product, more active integration of writers, from start to finish, would ensure better communication across the board and ensure the user’s needs are always understood and actively spoken for.
Doheny-Farina then introduces two case studies to better illustrate his point, both featuring technical writers with more access to and integration with the product teams than the norm. The first case study, ABC Company, featured a company facing impending deadlines for a product that was excessively “buggy” and ill-functioning; writers were therefore integrated into the development team to foster better and more effective collaboration, in a sort of “all hands on deck” approach. The two writers, Corrie and Walter, saw their responsibilities within the team grow to include writing and contributing to the design specs for the product, synthesizing and bringing in outside information for developers and engineers, and coordinating with end users on the overall usability and ease of use of the product. This case was unusual in that it put a very high burden on both technical writers; they did, however, manage to learn a lot from the experience and provide more value than technical writers typically would under normal circumstances.

The other case study, XYZ Corporation, featured a corporation that attempted a more gradual, grass-roots integration of technical writers and information specialists into product development. In effect, co-locating writers and communication specialists among designers and developers caused two clear specializations to emerge for writers to occupy. Doheny-Farina calls the first the usability writer: a usability advocate whose main focus is the interface of the product and the kinds of front-end interactions that end users are likely to encounter. The second, known as the design writer, served as a technical specialist in a given software or technical aspect and was tasked with, among other things, acting as facilitator to the rest of the development team and keeping everyone informed of each other’s progress. Design writers were also able to provide a more general overview of the product, in order to better assist developers and engineers who may be focused on their own, narrower specialties.

Anticipating the ever-changing field of technical and professional communication that students are sure to encounter in the workforce, Doheny-Farina then advocates for more active integration and co-location of writers among design teams in the workplace, and for more multi-disciplinary curricula in graduate technical writing programs, drawing from fields like rhetoric, visual design, communication and usability research, product design, and science and technology.

Thursday, April 24, 2014

Carroll et al.: “The Minimal Manual” Précis


About the Study
Carroll et al. developed a Minimal Manual for a word processing program in order to address problems that occur in the training process with standard self-instruction manuals. They applied Minimalist training principles in the design of the manual and tested the result in two experiments. Based on these experiments, the researchers concluded that the Minimal Manual helped users perform better and more efficiently than the standard self-instruction manual did.

Designing the Minimal Manual
Principles of the Minimal training model: Carroll et al. explain that their “strategy in training design was to accommodate, indeed to try to capitalize on, manifest learning styles and strategies” (74). Based on prior research on the subject, they outline the principles of the Minimal training model. These principles include focusing on real world activities and tasks, cutting out information that users often overlook, supporting error recognition and recovery, and providing “guided exploration” (75-77).

Design Process: Carroll et al. argue that Minimalist design is similar to other aspects of user interface design in that it is “developed iteratively: designed, empirically evaluated, and then redesigned” (77). They explain how the principles of the Minimal training model influenced their design (77-81), as well as the procedures they used to test and revise their document prior to the two experiments (81-84).

Experiment 1
Purpose: The purpose of this experiment was to “contrast a commercially developed standard self-instruction manual (SS) with the experimentally developed Minimal Manual (MM) in an office-like environment” (84).

Participants: Nineteen subjects were chosen by an outside agency. Ten of them participated in the MM condition while the other nine participated in the SS condition (85). These subjects were screened to have experience with typical office work but little experience with word processing software. None of them had prior experience with the software used in the study (86).

Procedure: This experiment was a “between-subjects contrast of the independent variable of manual (MM or SS)” (84). Within each condition, subjects were placed into groups of two or three in a simulated office environment. They were given the training manual for their condition and asked to finish prerequisite training and perform a related task. This process was repeated eight times and covered different material. As the experiment was supposed to mimic a real environment, subjects were allowed to talk to each other, utilize the system library, and call a support hotline in addition to referring to their manuals. The researchers collected two dependent measures: the time to complete the training and performance tasks and the performance on the eight performance tasks (86-88).

Results and Discussion: The researchers discuss a number of statistically significant results regarding time and task performance. They found that users of the Minimal Manual accomplished more tasks in less time than their SS counterparts. The researchers conclude that these results are strong indicators that the Minimal Manual has a better design than the standard self-instruction manual, but they could not determine why it was better from this experiment.
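As an illustration of how such a comparison can be tested, the sketch below runs a two-sample t-test on invented completion times for the 10 MM and 9 SS participants. The précis does not specify which statistical test Carroll et al. used, so the t-test and all of the data here are assumptions.

```python
from scipy import stats

# Invented task-completion times (minutes); MM = Minimal Manual group
# (10 subjects), SS = standard self-instruction group (9 subjects).
mm_times = [31.0, 28.5, 35.2, 29.8, 33.1, 27.9, 30.4, 32.6, 29.1, 34.0]
ss_times = [44.2, 39.8, 47.5, 41.3, 45.9, 40.6, 43.7, 46.1, 42.2]

t_stat, p_value = stats.ttest_ind(mm_times, ss_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would support the claim that MM users finished faster.
```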

Experiment 2
Purpose: The purpose of this experiment was to further explore “the contrast between a commercially developed, standard self-instruction manual (SS) and the experimentally developed Minimal Manual (MM)” (89).

Participants: Thirty-two subjects were chosen by an outside agency. Eight of them participated in each condition (MM, SS, “learn while doing” (LWD), and “learn by the book” (LBB)) (89). These subjects were screened for the same qualities as the first experiment. Three participants were replaced due to frustration (91).

Procedure: The experiment had a “2x2 between-subjects design” (90). The LWD participants received five hours to perform tasks, while the LBB participants received three hours to “use the manual in order to learn about the system” (89-90). They separately received two more hours to perform other tasks. They were encouraged to use the manual to perform tasks, but the entire library was available. The researcher sat with each participant, and the participants were encouraged to think aloud as they completed six tasks. Researchers measured time and performance as in Experiment 1, but also measured attention and effort involved (92). They achieved this by coding participants’ actions and tabulating errors.

Results and Discussion: This experiment also yielded statistically significant results. Measures that were used to analyze learning and targeted errors and skills indicated that MM subjects “performed better and more efficiently” (97).

General Discussion
When comparing the two experiments, the researchers conclude that the two “converge on the conclusion that the Minimal Manual is substantially and reliably superior to the commercial self-instruction manual” (99). They point out that this study cannot be used to generalize to “other areas of educational technology” (99).

Sheehy: “The Social Life of an Essay: Standardizing Forces in Writing” Précis



About the Study
Sheehy focuses on an eight-week period in which she acted as a participant observer in a seventh grade classroom. During those eight weeks, the class worked on a “Building Project,” which culminated in a speech, based on the five-paragraph essay, that one of the students gave to the school board. Sheehy uses ethnographic research methods to examine standardization in composition studies.

Theories that Inform the Research
Standardization: Sheehy suggests that “standardization practices occur in social life and occurred prior to this testing era” (336). She writes that forms and standardization can be useful in the “game of social life,” citing Bourdieu, Milroy & Milroy, and Shuman (336). In terms of standardization in the essay, Sheehy first looks at Farr’s theory of decontextualization, but refers to Shuman and Miller’s discussions of recontextualization and genres to provide more insight (337-338). Based on Shuman and Miller, she concludes that “standards and forms cannot be fixed” (338).

Dimensions of Standardization: Sheehy bases her dimensions of standardization for this study – text as a trajectory of exchange, articulation of relations, and centrifugal and centripetal forces – on Kamberelis and de la Luna’s “three coconstructed dimensions” (338). In terms of text as a trajectory of exchange, she discusses how the essay is a commodity to be produced, distributed, and consumed, citing Wells, Appadurai, and Fairclough. She then explains Gramsci’s theory of articulation and rearticulation through Grossberg’s more contemporary analysis: “‘Articulation is the construction of one set of relations out of another…Rearticulation occurs through constant struggle to reposition practices within a shifting field of forces’” (340). She also discusses Gramsci’s theory of hegemony and Grossberg’s demonstration of it as an event, or a practice in which “reality is transformed” and is situated in a specific context (341-342). Finally, she outlines Bakhtin’s theory of heteroglossia, in which centripetal forces and centrifugal forces are used to standardize and stratify language, respectively (342).

Methodology
Participants and Data Collection: Sheehy was a participant observer in a seventh grade classroom. She helped plan and teach the eight-week project, which was composed of five phases. Participants included all of the 30 students and teachers in the classroom during the second 90-minute block, but she focused on two small groups based on her “rapport with some of the students” (345). Data collection methods included observation, audio recordings, field notes, interviews, community surveys, and Focus Group B’s speech drafts.

Data Analysis: Sheehy outlines two levels of analysis: “Charting production, consumption, and distribution as articulation of the Building Project” and “Centripetal and centrifugal forces in writing” (346, 353). In the first level, she explains how she coded her data and filled out a Production Trajectory Map based on it (346). The map reveals that “the consumption and distribution columns are replete with difference of opinion; yet the production column (the speech itself) is strikingly unified” (353). From there, she outlines and examines the tensions evident in the map using Bakhtin’s theory of centripetal (unifying) and centrifugal (stratifying) forces (353).

Findings
Sheehy writes that the teacher’s graphic organizer, Bakhtin’s idea of genre memory, or the knowledge of a genre based on the understanding of similar genres, and the teacher’s comments were the three most unifying forces at work while the students wrote their speeches (357). She points out that the tensions that she outlined led to “strategic use of dearticulation/rearticulation” (360). By delinking ideas that they learned in class and relinking those ideas in their speeches, Sheehy writes that the students were able to “effect cohesion” with strategies such as emotional appeal, veiling contradictions, and interdiscursive alliances (360). She concludes, “[the speech] was a rearticulation of many texts and relationships, which changed constantly as ideas were produced and consumed in this example of a game of social life” (366).

Limitations
Sheehy explains three limitations of her study. The first is that her methodology did not find many connections between the situation in the classroom and the history of teaching essay writing in schools (366). The second is that she framed the speech as an essay, which it was not; she concedes that the forces that helped create a successful outcome for the speech might not have worked had the students not been addressing the school board (367). Finally, she explains that her research covers only one of the speeches the class wrote, which could suggest that they were all “produced similarly” (367). She clarifies that they were not, and suggests a cross-essay analysis for further research.