At the time of this writing, the following preliminary results have been gathered for my study:
Of the twelve people who have completed the survey so far, seven were white females, two were white males, two were black males, and one was a black female.
Preliminary data:
Writer 1
Race--25% correct; most (36%) not sure at all
Age--18% correct; most (42%) not sure at all
Gender--92% correct; most (42%) not sure at all
Writer 2
Race--8% correct; 58% not sure at all
Age--33% correct; 50% pretty sure
Gender--92% incorrect; most (33%) pretty sure
Writer 3
Race--75% correct; most (42%) not sure at all
Age--33% correct; most (42%) not sure at all
Gender--75% correct; most (42%) kind of sure
Writer 4
Race--75% correct; 58% not sure at all
Age--42% correct; most (42%) kind of sure
Gender--67% incorrect; 33% pretty sure
Writer 5
Race--58% correct; 58% not sure at all
Age--67% incorrect; 42% not sure at all
Gender--92% correct; 8% positive
Writer 6
Race--58% correct; most (58%) not sure at all
Age--75% correct; 58% not sure at all
Gender--75% incorrect; most (42%) kind of sure
Writer 7
Race--67% incorrect; 67% not sure at all
Age--50% correct; 33% each not sure at all, kind of sure, and pretty sure
Gender--75% correct; 50% kind of sure
Writer 8
Race--73% correct; 55% not sure at all
Age--82% incorrect; 64% pretty sure
Gender--55% correct; 36% pretty sure
Writer 9
Race--73% correct; 55% not sure at all
Age--45% correct; 45% pretty sure
Gender--73% incorrect; most (36%) pretty sure
Writer 10
Race--50% correct; most (50%) not sure at all
Age--58% incorrect; 42% pretty sure
Gender--100% correct; 25% positive
Participants were most accurate when guessing the gender of the writers, though even then they remained unsure of their answers. Interestingly, two of the three male writers were incorrectly guessed to be female, and participants were much more confident in those incorrect guesses. Participants were almost as accurate when guessing the race of the writers, although they were much less sure of those guesses. Participants were least accurate when guessing the age of the writers, yet expressed more confidence in those guesses.
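To make that comparison concrete, the per-writer figures listed above can be averaged with a few lines of Python. This is only a rough sketch of the arithmetic: it assumes that wherever I recorded only a percentage of incorrect guesses, the corresponding percentage of correct guesses is simply 100 minus that figure.

# Rough sketch: average guess accuracy per category across the ten writers,
# using the percentages listed above. Assumption: where only "% incorrect"
# was recorded, correct = 100 - incorrect.
correct = {
    "gender": [92, 8, 75, 33, 92, 25, 75, 55, 27, 100],
    "race":   [25, 8, 75, 75, 58, 58, 33, 73, 73, 50],
    "age":    [18, 33, 33, 42, 33, 75, 50, 18, 45, 42],
}

for category, values in correct.items():
    mean = sum(values) / len(values)
    print(f"{category}: {mean:.0f}% of guesses correct on average")

Run as written, this gives roughly 58% correct for gender, 53% for race, and 39% for age, matching the ordering described above.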
These preliminary results align with previous research. DeAndrea, Shaw, and Levine (2010) found no significant differences in the way different races represent themselves on their Facebook pages. As in the DeAndrea et al. study, the writers in my study were solicited from the friends of a single Facebook profile, suggesting that the writers may share similarities in personality, writing style, and other factors, and that these similarities override any racial differences. The seemingly homogeneous group chosen for the study may contribute to the difficulty of discerning race through the text provided. Participants' attempts to guess the race of the writers, although fairly accurate, revealed a low level of confidence in their guesses. These results seem to challenge Szpara and Wylie's (2007) finding that all the African American participants had identifiable features of African American English. The uncertainty of the participants when guessing the writers' races could be attributed to a lack of obvious African American English features and other stereotypically ethnic writing features.
Two mini-interviews were conducted while the participants completed the survey. Attempting to guess the race of writer 4, a participant asked, "What is Sons of Anarchy?" referencing the post provided by the writer, and asked whether the characters on the show were white. This question suggests that this participant, a 33-year-old black male, was relying on context to determine the race of the writer, and possibly the age and gender as well. During the second interview, a 50-year-old black female revealed difficulty and frustration with guessing accurately. "I'm not good at this kind of thing," she stated, suggesting that the demographic information revealed by the posts was not obvious.
Monday, November 22, 2010
Wednesday, November 17, 2010
Facebook, digital writing and cultural identity
In the study Online Language: The Role of Culture in Self-Expression and Self-Construal on Facebook, the authors examined Facebook pages as a method of identity construction and self-expression. Basing their hypotheses on the results of similar studies, the researchers used text from the social networking site to explore how the influence of culture on self-construal and self-expression is reflected in language, and to determine whether there are differences between Caucasian, African American, and ethnic Asian users.
The researchers examined 120 Facebook pages: 60 belonging to males and 60 to females, with 40 belonging to Caucasians, 40 to Asians, and 40 to African Americans. All participants were solicited through one Facebook profile belonging to a student at a Midwestern university. The pages were coded by trained research assistants using the following categories: physical description, social affiliation, internal expression, immediate situation, other's judgment, possession, and miscellaneous.
The following hypotheses were posed and examined using the data:
African Americans were hypothesized to have a higher proportion of internal expressions, followed by Caucasians and finally Asians. The study concluded that there was no significant difference in internal expressions used by these groups. Asians were expected to have a higher level of social affiliation expressed on their pages, followed by Caucasians and finally African Americans. The study concluded that there were no significant differences in social affiliation between these groups. Contrary to the initial hypothesis, African Americans were found to have a significantly higher percentage of words indicating social interaction than Caucasians and Asians. Although not hypothesized, African Americans were found to have significantly more internalized attributes than Caucasians or Asians, who did not significantly differ from one another.
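Although the coding in the study was carried out by trained human coders, the bookkeeping behind it is easy to picture: each coded statement on a page falls into one of the categories listed above, and each page is then summarized as the proportion of its statements in each category, which can be compared across groups. The Python sketch below is only a toy illustration of that tallying, with an invented example page; it is not the authors' actual procedure or analysis.

from collections import Counter

# The seven coding categories named above.
CATEGORIES = ["physical description", "social affiliation", "internal expression",
              "immediate situation", "other's judgment", "possession", "miscellaneous"]

def category_proportions(coded_statements):
    # Summarize one page as the share of its coded statements in each category.
    counts = Counter(coded_statements)
    total = sum(counts.values()) or 1
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

# Hypothetical page: each entry is a coder's category label for one statement.
page = ["internal expression", "social affiliation", "internal expression",
        "possession", "immediate situation"]
for cat, share in category_proportions(page).items():
    if share:
        print(f"{cat}: {share:.0%}")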
I would recommend this study to researchers studying race and digital writing. Unlike many studies on race and writing, this study focuses less on the linguistic characteristics that may distinguish a writer's race and more on the content of the text composed and how that may or may not differ between races. Additionally, although the participants differed in race, the study chose a population that would presumably have similar social and personal characteristics by virtue of being connected through mutual friends and affiliated with the same university. I believe the selection of this population helps to validate the results, providing a measure of control for social factors such as level of education and peer association.
DeAndrea, D., Shaw, A., & Levine, T. (2010). Online language: The role of culture in self-expression and self-construal on Facebook. Journal of Language and Social Psychology, 29, 425.
Tuesday, November 9, 2010
Gender Performances Online
Herring, S. C., & Martinson, A. (2004). Assessing gender authenticity in computer-mediated language use: Evidence from an identity game. Journal of Language and Social Psychology, 23, 424–446.
Assessing Gender Authenticity in Computer-Mediated Language Use: Evidence From an Identity Game
In the study Assessing Gender Authenticity in Computer-Mediated Language Use: Evidence From an Identity Game, the authors, Susan C. Herring and Anna Martinson, analyze how gender is represented by digital writers and how gender is perceived by readers online. The study uses The Turing Game, a publicly available chat environment that supports spontaneous, synchronous text chat for the purpose of “To Tell the Truth”-style identity games, the most popular of which are games about gender identity. In these gender identity games, users attempt to deceptively represent themselves as a gender opposite their own, and judges attempt to guess the users’ correct gender using only the tools of language allowed in the purely text-based environment. The Turing Game can be found at http://www.cc.gatech.edu/elc/turing/info2_5.html.
Using publicly available data from the site, the researchers analyzed the game logs, judges' ratings, and debriefing chats to ascertain the users’ attempted gender performances, the judges’ assessments of the authenticity of those performances, and the users’ actual genders. Through a content analysis of this data, the following research questions were considered:
• How do contestants in gender identity games present themselves? Are there differences between real-life males and real-life females, between same-sex and cross-sex performances, and/or between male and female identity games?
• What aspect(s) of contestants’ self-presentation do judges attend to when assessing gender authenticity? Which aspects are most important in judges’ decisions?
• How successful are contestants’ self-presentation strategies? How successful are the judges’ assessment strategies in terms of their respective goals?
The game logs revealed that contestants “produce stereotypical content when attempting to pass as the opposite gender, as well as giving off stylistic cues to their real life gender” (Herring & Martinson, 2004). In turn, the judges relied on responses to stereotypically gendered questions as their primary strategy for assessing gender, which most often led them to guess the users’ genders incorrectly. This finding runs counter to previous evidence that people assess gender online based solely on linguistic style. Stylistic features such as message length and word choice often reflected the users’ true gender, aligning with previous studies concluding that writing styles are often highly gendered. The study concludes that “conventionally gendered ways of communicating are deeply embedded in people’s social identities, and that differences tend to persist even in conscious attempts to manipulate gendered language, regardless of whether others attend to them” (Herring & Martinson, 2004).
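The stylistic cues mentioned above, such as message length and word choice, are the kind of surface features that can be counted directly from chat text. The Python sketch below is a hypothetical illustration of extracting two such features; the feature set, word list, and sample messages are my own inventions and are not the measures used in the study.

# Hypothetical illustration only: two surface-level stylistic features of the
# kind discussed above (message length and word choice), computed per contestant.
# The word list and sample messages are invented, not taken from the study.
HEDGE_WORDS = {"maybe", "sort", "kind", "guess", "probably"}

def style_features(messages):
    words = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    return {
        "mean_message_length": sum(len(m.split()) for m in messages) / len(messages),
        "hedge_word_rate": sum(w in HEDGE_WORDS for w in words) / max(len(words), 1),
    }

print(style_features(["I guess I like sports, maybe football.", "Sure."]))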
As a resource for scholars studying digital writing and identity, I would recommend this article as a secondary or tertiary resource, but not a primary one. Although I found the research questions well considered, I felt that the methodology of analyzing data only from the site limited the ability to ascertain how people assess gender. Triangulation of methods would be beneficial here; I would have liked to see interviews with the judges, allowing them to elaborate on the factors that influenced their attempts to guess a user’s gender.
Tuesday, November 2, 2010
Differential performance, standardized testing and race
Writing Differences in Teacher Performance Assessments: An Investigation of African American Language and Edited American English
This study sought to identify the source of racial disparity in test scores for the National Board for Professional Teaching Standards (NBPTS) portfolio assignment for the Middle Childhood/Generalist Certificate. The authors begin with the hypothesis that the use of features of African American Language (AAL) and Southeastern White English (SWE) may contribute to lower scores on this test. The researchers examined thirty-two written portfolio entries, 18 from African American candidates and 14 from European American candidates. These entries were coded by linguistic experts for grammatical, lexical, and discourse features, most notably features of AAL, SWE, and Speech Code Errors (SCE). The race of each writer was concealed from the coders.
The results revealed that AAL features appeared among African American candidates across all score levels and that African American candidates used both AAL and SCE more often than the European American candidates. Interestingly, though, the study found that the use of AAL and SCE was not associated with high or low scores, lending support to the effectiveness of the testing body’s bias-reduction training. The researchers conclude that although the study did not reveal a bias against non-standard English users, African American testers still received lower test scores than European Americans. Among the participants in this research, there was an approximately half-point difference in mean scores between African American and European American participants, a finding consistent with results from previous participant cohorts of the same test. The researchers hypothesize that “it is possible that some of this differential performance could be due to construct-irrelevant effect of the writing features used” (Szpara & Wylie, 2007).
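Since the reported gap is simply a difference in group means, the calculation itself is straightforward. The Python sketch below shows the comparison with invented placeholder scores; the study's actual portfolio scores are not reproduced here.

# Minimal sketch of the group-mean comparison described above. The score lists
# are invented placeholders, not the study's data.
african_american_scores = [2.4, 2.7, 2.5, 3.0, 2.6]
european_american_scores = [3.0, 3.2, 2.9, 3.3, 3.1]

def mean(scores):
    return sum(scores) / len(scores)

gap = mean(european_american_scores) - mean(african_american_scores)
print(f"Difference in mean scores: {gap:.2f} points")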
Although the study was inconclusive as to whether standardized tests carry a linguistic bias against non-standard English use, I found it to be a useful starting point for discourse regarding race, writing, and differential performance. For my research, this study is interesting because it shows that linguistic experts are adept at identifying race in writing, but it raises the question of whether the average reader can do so as easily. It also asks whether the ability to distinguish racial identity through writing creates an inherent bias against writers who are identified as a minority. As I will be asking the participants of my study to attempt to identify race, age, and gender through casual writing, I find this study to be a useful point of comparison. Where the expert linguists in this study coded AAL and SWE features, I will be asking non-experts to identify cues that may denote the writer’s demographic categories. I believe the comparison between the ability of experts to identify demographic information through writing and that of non-experts will open new areas of discourse in the subjects of digital writing, race, and linguistics.
Szpara, M. Y., & Wylie, E. C. (2007). Writing differences in teacher performance assessments: An investigation of African American language and edited American English. Applied Linguistics, 29(2), 244-266.