Technology
Brucks, Melanie and Jonathan Levav (forthcoming), “How the Kinesthetic Properties of a Response Scale Affect Judgment,” Journal of Consumer Research.
This research examines how the movements an interface requires of a consumer—that is, its “kinesthetic properties”—can alter what a consumer attends to when responding and, in turn, change the response itself. We compare the kinesthetic properties of two ubiquitous scale formats: slider and radio-button scales. Six studies (plus four in the Web Appendix) show that dragging a slider (vs. clicking a radio button) elicits responses that are closer to the scale’s starting point. This effect occurs because the slider allows participants to engage with the scale as they consider their options: when dragging past each response option, attention is directed to that option, increasing its chance of being selected. Supporting this account, sliders yield responses closer to the starting point only when participants physically drag the cursor across options to their desired response, not when they click on it directly. Furthermore, participants dragging a slider interact with the scale earlier in the judgment process and exhibit a greater visual focus on left-side (vs. right-side) options on the scale compared to participants clicking a radio button. These findings suggest that marketers, graphic designers, and researchers should consider how the kinesthetic properties of digital interfaces may shape consumer judgment.
Brucks, Melanie and Olivier Toubia (2025), “Prompt Architecture Induces Methodological Artifacts in Large Language Models,” PLoS One, 20(4), e0319159.
We examine how the seemingly arbitrary way a prompt is posed, which we term “prompt architecture,” influences responses provided by large language models (LLMs). Five large-scale, full-factorial experiments performing standard (zero-shot) similarity evaluation tasks using GPT-3, GPT-4, and Llama 3.1 document how several features of prompt architecture (order, label, framing, and justification) interact to produce methodological artifacts, a form of statistical bias. We find robust evidence that these four elements unduly affect responses across all models, and although we observe differences between GPT-3 and GPT-4, the changes are not necessarily for the better. Specifically, LLMs demonstrate both response-order bias and label bias, and framing and justification moderate these biases. We then test different strategies intended to reduce methodological artifacts. Specifying to the LLM that the order and labels of items have been randomized does not alleviate either response-order or label bias. The use of uncommon labels reduces (but does not eliminate) label bias yet exacerbates response-order bias in GPT-4, and reduces neither bias in Llama 3.1. By contrast, aggregating across prompts generated using a full factorial design eliminates response-order and label bias. Overall, these findings highlight the inherent fallibility of any individual prompt when using LLMs, as any prompt contains characteristics that may subtly interact with a multitude of hidden associations embedded in rich language data.
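The debiasing strategy that works, aggregating over a full factorial of prompt variants, can be sketched as follows. This is an illustrative sketch, not the paper’s code: the specific feature levels, prompt wording, and item names below are assumptions chosen only to show the factorial structure (order × label × framing × justification) and the averaging step.

```python
# Illustrative sketch (assumed feature levels and wording): enumerate a
# full factorial of prompt-architecture features, then aggregate numeric
# responses across all variants so order/label artifacts cancel out.
from itertools import product

ORDERS = ["A-then-B", "B-then-A"]          # response-order feature
LABELS = [("1", "2"), ("X", "Y")]          # label feature
FRAMINGS = ["similarity", "dissimilarity"] # framing feature
JUSTIFY = [False, True]                    # justification feature

def build_prompts(item_a, item_b):
    """Return one prompt per cell of the 2x2x2x2 factorial design."""
    prompts = []
    for order, (la, lb), framing, justify in product(
        ORDERS, LABELS, FRAMINGS, JUSTIFY
    ):
        first, second = (item_a, item_b) if order == "A-then-B" else (item_b, item_a)
        text = (
            f"Rate the {framing} of the two items.\n"
            f"{la}. {first}\n{lb}. {second}\n"
        )
        if justify:
            text += "Briefly justify your answer.\n"
        prompts.append(text)
    return prompts

def aggregate(scores):
    """Average one numeric response per factorial cell; biases tied to
    any single prompt variant wash out in the mean."""
    return sum(scores) / len(scores)

variants = build_prompts("apple", "orange")
print(len(variants))  # 16 prompt variants (2 * 2 * 2 * 2)
```

In use, each variant would be sent to the model and the per-variant scores passed to `aggregate`; the point is that no single cell’s response is trusted on its own.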
Brucks, Melanie, Jacqueline Rifkin, and Jeff Johnson (2025), “Video Call Glitches Trigger Uncanniness and Harm Consequential Life Outcomes,” Nature.
People are increasingly using video calls for high-stakes interactions that once required face-to-face contact, from medical consultations to job interviews to court proceedings. Yet videoconferencing introduces a novel communication issue: minor glitches, or intermittent errors in the transmission of audiovisual information during a virtual interaction. Across five experiments and three supplemental studies using both live and recorded interactions, we show that minor audiovisual glitches during video calls harm interpersonal judgments in consequential life domains (e.g., hiring decisions after a virtual interview or trust in a medical provider after a telehealth visit). Further, two archival datasets from real-world video calls reveal that glitches are associated with both reduced social connection and a lower likelihood of being granted criminal parole. We find that audiovisual glitches damage interpersonal judgments because they break the illusion of face-to-face contact (e.g., by distorting faces, misaligning audio and visual cues, making movements appear “choppy”), evoking “uncanniness”—a strange, creepy, or eerie feeling. As the uncanniness of a glitch increases, so does its harm to interpersonal judgments. Further, audiovisual glitches only undermine interpersonal judgments in video calls that simulate face-to-face interaction, demonstrating that the negative effect produced by glitches goes beyond mere disruptiveness, comprehension difficulties, and negative attributions. These findings have critical implications for digital equity. Despite being considered a boon to access, virtual communication may unintentionally perpetuate inequality. Because disadvantaged groups have poorer internet connections, they likely experience more glitches and, in turn, worse outcomes in consequential contexts such as health, careers, justice, and social connection.
Technology and Innovation
Brucks, Melanie and Jonathan Levav (2022) “Virtual Communication Curbs Creative Idea Generation,” Nature, 605(7908), 108-112.
COVID-19 accelerated a decade-long shift to remote work by normalizing working from home on a large scale. Indeed, 75% of US employees in a 2021 survey reported a personal preference for working remotely at least one day per week, and studies estimate that 20% of US workdays will take place at home after the pandemic ends. Here we examine how this shift away from in-person interaction affects innovation, which relies on collaborative idea generation as the foundation of commercial and scientific progress. In a laboratory study and a field experiment across five countries (in Europe, the Middle East and South Asia), we show that videoconferencing inhibits the production of creative ideas. By contrast, when it comes to selecting which idea to pursue, we find no evidence that videoconferencing groups are less effective (and preliminary evidence that they may be more effective) than in-person groups. Departing from previous theories that focus on how oral and written technologies limit the synchronicity and extent of information exchanged, we find that our effects are driven by differences in the physical nature of videoconferencing and in-person interactions. Specifically, using eye-gaze and recall measures, as well as latent semantic analysis, we demonstrate that videoconferencing hampers idea generation because it focuses communicators on a screen, which prompts a narrower cognitive focus. Our results suggest that virtual interaction comes with a cognitive cost for creative idea generation.
Innovation
Brucks, Melanie and Szu-chi Huang (2020), “Does Practice Make Perfect? The Contrasting Effects of Repeated Practice on Creativity,” Journal of the Association for Consumer Research 5(3), 291-301.
Could repeatedly “exercising” the creativity muscle help build up creative performance over time? To answer this question, we conducted three longitudinal studies with a total of 830 participants, resulting in the generation of 17,652 creative solutions and 39,211 unique product names. In study 1 (and its replication), we uncovered contrasting effects of practice on creativity. While daily practice fostered convergent creativity, divergent creativity showed mixed effects—the number of unique ideas generated remained stagnant, and the average novelty of ideas decreased with practice. In study 2, we found that repeated practice affected divergent creativity through two opposing forces: practice hindered the activation of less routinized routes (and thus hurt the number of unique ideas generated) but also positively affected the fluency of the routinized routes (and thus increased the number of unique ideas generated). Interestingly, the vast majority of participants inaccurately predicted that repeated practice would uniformly facilitate both types of creativity.
Other Work
Kupor, Daniella, Melanie Brucks, and Szu-chi Huang, “And the Winner is…? Forecasting the Outcome of Others’ Competitive Efforts,” under review at Journal of Personality and Social Psychology.
People frequently forecast the outcomes of competitive events. Some forecasts are about oneself (e.g., forecasting how one will perform in an athletic competition, school or job application, or professional contest), while many other forecasts are about others (e.g., predicting the outcome of another individual’s athletic competition, school or job application, or professional contest). In this research, we examine people’s forecasts about others’ competitive outcomes, illuminate a systematic bias in these forecasts, and document the source of this bias as well as its downstream consequences. Six experiments with a total of 1,643 participants in a variety of competitive contexts demonstrate that when observers forecast the outcomes that another individual will experience, they systematically overestimate the probability that this person will win. Importantly, this misprediction stems from a previously undocumented lay belief—the belief that other people generally achieve their intentions—which skews observers’ hypothesis testing. We find that this lay belief biases people’s predictions even in contexts in which the contestant’s intent is unlikely to generate the desired outcome, and even when forecasters are directly incentivized to be accurate.
Carey, Angela L., Melanie Brucks, Albrecht C.P. Kufner, Nicholas Holtzman, Fenne große Deters, Mitja D. Back, M. Brent Donnellan, James W. Pennebaker, and Matthias R. Mehl (2015), “Narcissism and the Use of Personal Pronouns: Revisited,” Journal of Personality and Social Psychology, 109(3), e1–e15.
Among both laypersons and researchers, extensive use of first-person singular pronouns (i.e., I-talk) is considered a face-valid linguistic marker of narcissism. However, the assumed relation between narcissism and I-talk has yet to be subjected to a strong empirical test. Accordingly, we conducted a large-scale (N = 4,811), multisite (5 labs), multimeasure (5 narcissism measures), and dual-language (English and German) investigation to quantify how strongly narcissism is related to using more first-person singular pronouns across different theoretically relevant communication contexts (identity-related, personal, impersonal, private, public, and stream-of-consciousness tasks). Overall (r = .02, 95% CI [−.02, .04]) and within the sampled contexts, narcissism was unrelated to use of first-person singular pronouns (total, subjective, objective, and possessive). This consistent near-zero effect has important implications for making inferences about narcissism from pronoun use and prompts questions about why I-talk tends to be strongly perceived as an indicator of narcissism in the absence of an underlying actual association between the two variables.
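The I-talk measure described above can be sketched as a simple rate: the share of word tokens that are first-person singular pronouns. This is a minimal illustrative sketch under assumed tokenization, not the paper’s LIWC-based pipeline; the pronoun set and regex are assumptions.

```python
# Illustrative sketch (assumed simple tokenizer, not the paper's pipeline):
# I-talk rate = first-person singular pronouns / total word tokens.
import re

# Total I-talk combines subjective (I), objective (me), and possessive
# (my, mine) forms, plus the reflexive (myself).
FPS_PRONOUNS = {"i", "me", "my", "mine", "myself"}

def i_talk_rate(text):
    """Fraction of word tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FPS_PRONOUNS)
    return hits / len(tokens)

print(i_talk_rate("I think my idea is great"))  # 2 of 6 tokens ≈ 0.333
```

The paper’s near-zero correlation means a rate like this, however computed, carries essentially no signal about narcissism.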
Huang, Szu-chi, Melanie Brucks, Jaehwan Song, and Margaret C. Campbell (2024), “Beyond Achievement: Transformation Mindset Enhances Authenticity After Goal Success,” Motivation Science, 10(3), 171–181.
In this article, we discuss three distinct mindsets that can change how people think about goal success: thinking about goal success as (a) moving away from the old goal-unattained state; (b) arriving at the new goal-attained state; or (c) a transformation from the old to the new state. We review prior literature and new empirical findings and conclude that a transformation mindset leads to the highest feeling of authenticity and the lowest feeling of fragility after successfully attaining a goal. Importantly, this heightened feeling of authenticity and decreased sense of fragility contribute to positive social and individual learning behaviors, such as a greater desire to share goal-relevant information with others and interest in goal-maintenance behaviors. We end by underscoring the implications of developing a transformation mindset for human motivation, identity integration, intervention designs, and continuous self-development in contexts like education, health, and business.